00:00:00.001 Started by upstream project "autotest-per-patch" build number 132007 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.080 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.122 Fetching changes from the remote Git repository 00:00:00.125 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.177 Using shallow fetch with depth 1 00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.177 > git --version # timeout=10 00:00:00.221 > git --version # 'git version 2.39.2' 00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.254 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.258 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.269 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.281 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:05.281 > git config core.sparsecheckout # timeout=10 00:00:05.292 > git read-tree -mu HEAD # timeout=10 00:00:05.308 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:05.331 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:05.331 > git rev-list --no-walk 
44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:05.434 [Pipeline] Start of Pipeline 00:00:05.446 [Pipeline] library 00:00:05.447 Loading library shm_lib@master 00:00:05.447 Library shm_lib@master is cached. Copying from home. 00:00:05.459 [Pipeline] node 00:00:05.469 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:05.470 [Pipeline] { 00:00:05.479 [Pipeline] catchError 00:00:05.480 [Pipeline] { 00:00:05.491 [Pipeline] wrap 00:00:05.498 [Pipeline] { 00:00:05.503 [Pipeline] stage 00:00:05.504 [Pipeline] { (Prologue) 00:00:05.605 [Pipeline] echo 00:00:05.607 Node: VM-host-WFP7 00:00:05.612 [Pipeline] cleanWs 00:00:05.619 [WS-CLEANUP] Deleting project workspace... 00:00:05.619 [WS-CLEANUP] Deferred wipeout is used... 00:00:05.626 [WS-CLEANUP] done 00:00:05.800 [Pipeline] setCustomBuildProperty 00:00:05.888 [Pipeline] httpRequest 00:00:06.266 [Pipeline] echo 00:00:06.268 Sorcerer 10.211.164.101 is alive 00:00:06.274 [Pipeline] retry 00:00:06.275 [Pipeline] { 00:00:06.284 [Pipeline] httpRequest 00:00:06.288 HttpMethod: GET 00:00:06.288 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:06.289 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:06.295 Response Code: HTTP/1.1 200 OK 00:00:06.295 Success: Status code 200 is in the accepted range: 200,404 00:00:06.296 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.028 [Pipeline] } 00:00:09.045 [Pipeline] // retry 00:00:09.052 [Pipeline] sh 00:00:09.340 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:09.354 [Pipeline] httpRequest 00:00:10.052 [Pipeline] echo 00:00:10.054 Sorcerer 10.211.164.101 is alive 00:00:10.064 [Pipeline] retry 00:00:10.066 [Pipeline] { 00:00:10.080 [Pipeline] httpRequest 00:00:10.085 HttpMethod: GET 00:00:10.086 URL: 
http://10.211.164.101/packages/spdk_3edf9f121d603c57c359c7ad9564988550567792.tar.gz 00:00:10.087 Sending request to url: http://10.211.164.101/packages/spdk_3edf9f121d603c57c359c7ad9564988550567792.tar.gz 00:00:10.108 Response Code: HTTP/1.1 200 OK 00:00:10.108 Success: Status code 200 is in the accepted range: 200,404 00:00:10.109 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_3edf9f121d603c57c359c7ad9564988550567792.tar.gz 00:00:56.087 [Pipeline] } 00:00:56.106 [Pipeline] // retry 00:00:56.114 [Pipeline] sh 00:00:56.399 + tar --no-same-owner -xf spdk_3edf9f121d603c57c359c7ad9564988550567792.tar.gz 00:00:58.972 [Pipeline] sh 00:00:59.256 + git -C spdk log --oneline -n5 00:00:59.256 3edf9f121 bdev/nvme: Fix race bug between clear_pending_resets and reset_ctrlr_complete() 00:00:59.257 a90f7d980 bdev/nvme: Relocate bdev_nvme_reset_ctrlr_complete() 00:00:59.257 6eb2657ac bdev/nvme: Inline bdev_nvme_reset_ctrlr() into _bdev_nvme_reset_io() 00:00:59.257 4bb6b093c bdev/nvme: Inline nvme_ctrlr_op(CTRLR_OP_RESET) into _bdev_nvme_reset_io() 00:00:59.257 517e85fc5 bdev/nvme: Factor out operations under mutex from bdev_nvme_reset_ctrlr() 00:00:59.278 [Pipeline] writeFile 00:00:59.293 [Pipeline] sh 00:00:59.579 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:59.591 [Pipeline] sh 00:00:59.876 + cat autorun-spdk.conf 00:00:59.876 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.876 SPDK_RUN_ASAN=1 00:00:59.876 SPDK_RUN_UBSAN=1 00:00:59.876 SPDK_TEST_RAID=1 00:00:59.876 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:59.884 RUN_NIGHTLY=0 00:00:59.886 [Pipeline] } 00:00:59.899 [Pipeline] // stage 00:00:59.914 [Pipeline] stage 00:00:59.916 [Pipeline] { (Run VM) 00:00:59.929 [Pipeline] sh 00:01:00.213 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:00.213 + echo 'Start stage prepare_nvme.sh' 00:01:00.213 Start stage prepare_nvme.sh 00:01:00.213 + [[ -n 7 ]] 00:01:00.213 + disk_prefix=ex7 00:01:00.213 + [[ -n 
/var/jenkins/workspace/raid-vg-autotest ]] 00:01:00.213 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:00.213 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:00.213 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.213 ++ SPDK_RUN_ASAN=1 00:01:00.213 ++ SPDK_RUN_UBSAN=1 00:01:00.213 ++ SPDK_TEST_RAID=1 00:01:00.213 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:00.213 ++ RUN_NIGHTLY=0 00:01:00.213 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:00.213 + nvme_files=() 00:01:00.213 + declare -A nvme_files 00:01:00.213 + backend_dir=/var/lib/libvirt/images/backends 00:01:00.213 + nvme_files['nvme.img']=5G 00:01:00.213 + nvme_files['nvme-cmb.img']=5G 00:01:00.213 + nvme_files['nvme-multi0.img']=4G 00:01:00.213 + nvme_files['nvme-multi1.img']=4G 00:01:00.213 + nvme_files['nvme-multi2.img']=4G 00:01:00.213 + nvme_files['nvme-openstack.img']=8G 00:01:00.213 + nvme_files['nvme-zns.img']=5G 00:01:00.213 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:00.213 + (( SPDK_TEST_FTL == 1 )) 00:01:00.213 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:00.213 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:00.213 + for nvme in "${!nvme_files[@]}" 00:01:00.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:00.213 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.213 + for nvme in "${!nvme_files[@]}" 00:01:00.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:00.213 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:00.213 + for nvme in "${!nvme_files[@]}" 00:01:00.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:00.213 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:00.213 + for nvme in "${!nvme_files[@]}" 00:01:00.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:00.213 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:00.213 + for nvme in "${!nvme_files[@]}" 00:01:00.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:00.213 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.213 + for nvme in "${!nvme_files[@]}" 00:01:00.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:00.213 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:00.213 + for nvme in "${!nvme_files[@]}" 00:01:00.213 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:01.153 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:01.153 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:01.153 + echo 'End stage prepare_nvme.sh' 00:01:01.153 End stage prepare_nvme.sh 00:01:01.166 [Pipeline] sh 00:01:01.451 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:01.451 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:01.451 00:01:01.451 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:01.451 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:01.451 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:01.451 HELP=0 00:01:01.451 DRY_RUN=0 00:01:01.451 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:01.451 NVME_DISKS_TYPE=nvme,nvme, 00:01:01.451 NVME_AUTO_CREATE=0 00:01:01.451 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:01.451 NVME_CMB=,, 00:01:01.451 NVME_PMR=,, 00:01:01.451 NVME_ZNS=,, 00:01:01.451 NVME_MS=,, 00:01:01.451 NVME_FDP=,, 00:01:01.451 SPDK_VAGRANT_DISTRO=fedora39 00:01:01.451 SPDK_VAGRANT_VMCPU=10 00:01:01.451 SPDK_VAGRANT_VMRAM=12288 00:01:01.451 SPDK_VAGRANT_PROVIDER=libvirt 00:01:01.451 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:01.451 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:01.451 SPDK_OPENSTACK_NETWORK=0 00:01:01.451 VAGRANT_PACKAGE_BOX=0 00:01:01.451 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:01.451 
FORCE_DISTRO=true 00:01:01.451 VAGRANT_BOX_VERSION= 00:01:01.451 EXTRA_VAGRANTFILES= 00:01:01.451 NIC_MODEL=virtio 00:01:01.451 00:01:01.451 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:01.452 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:03.359 Bringing machine 'default' up with 'libvirt' provider... 00:01:03.929 ==> default: Creating image (snapshot of base box volume). 00:01:03.929 ==> default: Creating domain with the following settings... 00:01:03.929 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730720009_853621fc3aa9215fda0e 00:01:03.929 ==> default: -- Domain type: kvm 00:01:03.929 ==> default: -- Cpus: 10 00:01:03.929 ==> default: -- Feature: acpi 00:01:03.929 ==> default: -- Feature: apic 00:01:03.929 ==> default: -- Feature: pae 00:01:03.929 ==> default: -- Memory: 12288M 00:01:03.929 ==> default: -- Memory Backing: hugepages: 00:01:03.929 ==> default: -- Management MAC: 00:01:03.929 ==> default: -- Loader: 00:01:03.929 ==> default: -- Nvram: 00:01:03.929 ==> default: -- Base box: spdk/fedora39 00:01:03.929 ==> default: -- Storage pool: default 00:01:03.929 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730720009_853621fc3aa9215fda0e.img (20G) 00:01:03.929 ==> default: -- Volume Cache: default 00:01:03.929 ==> default: -- Kernel: 00:01:03.929 ==> default: -- Initrd: 00:01:03.929 ==> default: -- Graphics Type: vnc 00:01:03.929 ==> default: -- Graphics Port: -1 00:01:03.929 ==> default: -- Graphics IP: 127.0.0.1 00:01:03.929 ==> default: -- Graphics Password: Not defined 00:01:03.929 ==> default: -- Video Type: cirrus 00:01:03.929 ==> default: -- Video VRAM: 9216 00:01:03.929 ==> default: -- Sound Type: 00:01:03.929 ==> default: -- Keymap: en-us 00:01:03.929 ==> default: -- TPM Path: 00:01:03.929 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:03.929 ==> default: -- Command line args: 00:01:03.929 
==> default: -> value=-device, 00:01:03.929 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:03.929 ==> default: -> value=-drive, 00:01:03.929 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:03.929 ==> default: -> value=-device, 00:01:03.929 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.929 ==> default: -> value=-device, 00:01:03.929 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:03.929 ==> default: -> value=-drive, 00:01:03.929 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:03.929 ==> default: -> value=-device, 00:01:03.929 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.929 ==> default: -> value=-drive, 00:01:03.929 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:03.929 ==> default: -> value=-device, 00:01:03.929 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:03.929 ==> default: -> value=-drive, 00:01:03.929 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:03.929 ==> default: -> value=-device, 00:01:03.929 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.188 ==> default: Creating shared folders metadata... 00:01:04.188 ==> default: Starting domain. 00:01:05.569 ==> default: Waiting for domain to get an IP address... 00:01:23.704 ==> default: Waiting for SSH to become available... 00:01:23.704 ==> default: Configuring and enabling network interfaces... 
00:01:29.026 default: SSH address: 192.168.121.143:22 00:01:29.026 default: SSH username: vagrant 00:01:29.026 default: SSH auth method: private key 00:01:30.934 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:39.096 ==> default: Mounting SSHFS shared folder... 00:01:41.641 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:41.641 ==> default: Checking Mount.. 00:01:43.018 ==> default: Folder Successfully Mounted! 00:01:43.018 ==> default: Running provisioner: file... 00:01:44.400 default: ~/.gitconfig => .gitconfig 00:01:44.969 00:01:44.969 SUCCESS! 00:01:44.969 00:01:44.969 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:44.969 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:44.969 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:44.969 00:01:44.978 [Pipeline] } 00:01:44.993 [Pipeline] // stage 00:01:45.001 [Pipeline] dir 00:01:45.002 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:45.003 [Pipeline] { 00:01:45.015 [Pipeline] catchError 00:01:45.017 [Pipeline] { 00:01:45.029 [Pipeline] sh 00:01:45.311 + vagrant ssh-config --host vagrant 00:01:45.311 + sed -ne /^Host/,$p 00:01:45.311 + tee ssh_conf 00:01:47.849 Host vagrant 00:01:47.849 HostName 192.168.121.143 00:01:47.849 User vagrant 00:01:47.849 Port 22 00:01:47.849 UserKnownHostsFile /dev/null 00:01:47.849 StrictHostKeyChecking no 00:01:47.849 PasswordAuthentication no 00:01:47.849 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:47.849 IdentitiesOnly yes 00:01:47.849 LogLevel FATAL 00:01:47.849 ForwardAgent yes 00:01:47.849 ForwardX11 yes 00:01:47.849 00:01:47.863 [Pipeline] withEnv 00:01:47.865 [Pipeline] { 00:01:47.879 [Pipeline] sh 00:01:48.163 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:48.163 source /etc/os-release 00:01:48.163 [[ -e /image.version ]] && img=$(< /image.version) 00:01:48.163 # Minimal, systemd-like check. 00:01:48.163 if [[ -e /.dockerenv ]]; then 00:01:48.163 # Clear garbage from the node's name: 00:01:48.163 # agt-er_autotest_547-896 -> autotest_547-896 00:01:48.163 # $HOSTNAME is the actual container id 00:01:48.163 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:48.163 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:48.163 # We can assume this is a mount from a host where container is running, 00:01:48.163 # so fetch its hostname to easily identify the target swarm worker. 
00:01:48.163 container="$(< /etc/hostname) ($agent)" 00:01:48.163 else 00:01:48.163 # Fallback 00:01:48.163 container=$agent 00:01:48.163 fi 00:01:48.163 fi 00:01:48.163 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:48.163 00:01:48.435 [Pipeline] } 00:01:48.451 [Pipeline] // withEnv 00:01:48.459 [Pipeline] setCustomBuildProperty 00:01:48.473 [Pipeline] stage 00:01:48.475 [Pipeline] { (Tests) 00:01:48.492 [Pipeline] sh 00:01:48.775 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:49.047 [Pipeline] sh 00:01:49.330 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:49.606 [Pipeline] timeout 00:01:49.606 Timeout set to expire in 1 hr 30 min 00:01:49.608 [Pipeline] { 00:01:49.622 [Pipeline] sh 00:01:49.906 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:50.474 HEAD is now at 3edf9f121 bdev/nvme: Fix race bug between clear_pending_resets and reset_ctrlr_complete() 00:01:50.487 [Pipeline] sh 00:01:50.771 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:51.073 [Pipeline] sh 00:01:51.356 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.632 [Pipeline] sh 00:01:51.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:52.174 ++ readlink -f spdk_repo 00:01:52.174 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:52.174 + [[ -n /home/vagrant/spdk_repo ]] 00:01:52.174 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:52.174 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:52.174 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:52.174 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:52.174 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:52.174 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:52.174 + cd /home/vagrant/spdk_repo 00:01:52.174 + source /etc/os-release 00:01:52.174 ++ NAME='Fedora Linux' 00:01:52.174 ++ VERSION='39 (Cloud Edition)' 00:01:52.174 ++ ID=fedora 00:01:52.174 ++ VERSION_ID=39 00:01:52.174 ++ VERSION_CODENAME= 00:01:52.174 ++ PLATFORM_ID=platform:f39 00:01:52.174 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:52.174 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.174 ++ LOGO=fedora-logo-icon 00:01:52.174 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:52.174 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.174 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:52.174 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.174 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.174 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.174 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:52.174 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.174 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:52.174 ++ SUPPORT_END=2024-11-12 00:01:52.174 ++ VARIANT='Cloud Edition' 00:01:52.174 ++ VARIANT_ID=cloud 00:01:52.174 + uname -a 00:01:52.174 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:52.174 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:52.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:52.743 Hugepages 00:01:52.743 node hugesize free / total 00:01:52.743 node0 1048576kB 0 / 0 00:01:52.743 node0 2048kB 0 / 0 00:01:52.743 00:01:52.743 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:52.743 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:52.743 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:52.743 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:52.743 + rm -f /tmp/spdk-ld-path 00:01:52.743 + source autorun-spdk.conf 00:01:52.743 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.743 ++ SPDK_RUN_ASAN=1 00:01:52.743 ++ SPDK_RUN_UBSAN=1 00:01:52.743 ++ SPDK_TEST_RAID=1 00:01:52.743 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.743 ++ RUN_NIGHTLY=0 00:01:52.743 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:52.743 + [[ -n '' ]] 00:01:52.743 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:53.002 + for M in /var/spdk/build-*-manifest.txt 00:01:53.002 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:53.002 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.002 + for M in /var/spdk/build-*-manifest.txt 00:01:53.002 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:53.002 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.002 + for M in /var/spdk/build-*-manifest.txt 00:01:53.002 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.002 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.002 ++ uname 00:01:53.002 + [[ Linux == \L\i\n\u\x ]] 00:01:53.002 + sudo dmesg -T 00:01:53.002 + sudo dmesg --clear 00:01:53.002 + dmesg_pid=5429 00:01:53.002 + [[ Fedora Linux == FreeBSD ]] 00:01:53.002 + sudo dmesg -Tw 00:01:53.002 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.002 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.002 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.002 + [[ -x /usr/src/fio-static/fio ]] 00:01:53.002 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.002 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.002 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.002 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:53.002 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.002 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.002 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.002 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.002 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.002 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.002 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:53.002 11:34:18 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:53.002 11:34:18 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:53.002 11:34:18 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.002 11:34:18 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:53.002 11:34:18 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:53.002 11:34:18 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:53.002 11:34:18 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.002 11:34:18 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:53.002 11:34:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:53.002 11:34:18 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:53.262 11:34:18 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:53.262 11:34:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:53.262 11:34:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:53.262 11:34:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.262 11:34:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.262 11:34:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.262 11:34:18 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.262 11:34:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.262 11:34:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.262 11:34:18 -- paths/export.sh@5 -- $ export PATH 00:01:53.262 11:34:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.262 11:34:18 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:53.262 11:34:18 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:53.262 11:34:18 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730720058.XXXXXX 00:01:53.262 11:34:18 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730720058.RS7wxi 00:01:53.262 11:34:18 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:53.262 11:34:18 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:53.262 11:34:18 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:53.262 11:34:18 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:53.262 11:34:18 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.262 11:34:18 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:53.262 11:34:18 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:53.262 11:34:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.262 11:34:18 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:53.262 11:34:18 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:53.262 11:34:18 -- pm/common@17 -- $ local monitor 00:01:53.262 11:34:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.262 11:34:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.262 11:34:18 -- pm/common@25 -- $ sleep 1 00:01:53.262 11:34:18 -- pm/common@21 -- $ date +%s 00:01:53.262 11:34:18 -- pm/common@21 -- $ date +%s 00:01:53.262 
11:34:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730720058
00:01:53.263 11:34:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730720058
00:01:53.263 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730720058_collect-cpu-load.pm.log
00:01:53.263 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730720058_collect-vmstat.pm.log
00:01:54.200 11:34:19 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:54.200 11:34:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:54.200 11:34:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:54.200 11:34:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:54.200 11:34:19 -- spdk/autobuild.sh@16 -- $ date -u
00:01:54.200 Mon Nov 4 11:34:19 AM UTC 2024
00:01:54.200 11:34:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:54.200 v25.01-pre-150-g3edf9f121
00:01:54.200 11:34:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:54.200 11:34:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:54.200 11:34:19 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:54.200 11:34:19 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:54.200 11:34:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.200 ************************************
00:01:54.200 START TEST asan
00:01:54.200 ************************************
00:01:54.200 using asan
00:01:54.200 11:34:19 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:01:54.200
00:01:54.200 real 0m0.001s
00:01:54.200 user 0m0.001s
00:01:54.200 sys 0m0.000s
00:01:54.200 11:34:19 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:54.200 11:34:19 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.200 ************************************
00:01:54.200 END TEST asan
00:01:54.200 ************************************
00:01:54.460 11:34:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:54.460 11:34:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:54.460 11:34:19 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:54.460 11:34:19 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:54.460 11:34:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.460 ************************************
00:01:54.460 START TEST ubsan
00:01:54.460 ************************************
00:01:54.460 using ubsan
00:01:54.460 11:34:19 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:54.460
00:01:54.460 real 0m0.000s
00:01:54.460 user 0m0.000s
00:01:54.460 sys 0m0.000s
00:01:54.460 11:34:19 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:54.460 11:34:19 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.460 ************************************
00:01:54.460 END TEST ubsan
00:01:54.460 ************************************
00:01:54.460 11:34:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:54.460 11:34:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:54.460 11:34:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:54.460 11:34:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:54.460 11:34:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:54.460 11:34:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:54.460 11:34:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:54.460 11:34:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:54.460 11:34:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:54.460 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:54.460 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:55.031 Using 'verbs' RDMA provider
00:02:11.320 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:29.421 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:29.421 Creating mk/config.mk...done.
00:02:29.421 Creating mk/cc.flags.mk...done.
00:02:29.421 Type 'make' to build.
00:02:29.421 11:34:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:29.422 11:34:52 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:29.422 11:34:52 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:29.422 11:34:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.422 ************************************
00:02:29.422 START TEST make
00:02:29.422 ************************************
00:02:29.422 11:34:52 make -- common/autotest_common.sh@1127 -- $ make -j10
00:02:29.422 make[1]: Nothing to be done for 'all'.
00:02:39.405 The Meson build system
00:02:39.405 Version: 1.5.0
00:02:39.405 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:39.405 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:39.405 Build type: native build
00:02:39.405 Program cat found: YES (/usr/bin/cat)
00:02:39.405 Project name: DPDK
00:02:39.405 Project version: 24.03.0
00:02:39.405 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:39.405 C linker for the host machine: cc ld.bfd 2.40-14
00:02:39.405 Host machine cpu family: x86_64
00:02:39.405 Host machine cpu: x86_64
00:02:39.405 Message: ## Building in Developer Mode ##
00:02:39.405 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:39.405 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:39.405 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:39.405 Program python3 found: YES (/usr/bin/python3)
00:02:39.405 Program cat found: YES (/usr/bin/cat)
00:02:39.405 Compiler for C supports arguments -march=native: YES
00:02:39.405 Checking for size of "void *" : 8
00:02:39.405 Checking for size of "void *" : 8 (cached)
00:02:39.405 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:39.405 Library m found: YES
00:02:39.405 Library numa found: YES
00:02:39.405 Has header "numaif.h" : YES
00:02:39.405 Library fdt found: NO
00:02:39.405 Library execinfo found: NO
00:02:39.405 Has header "execinfo.h" : YES
00:02:39.405 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:39.405 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:39.405 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:39.405 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:39.405 Run-time dependency openssl found: YES 3.1.1
00:02:39.405 Run-time dependency libpcap found: YES 1.10.4
00:02:39.405 Has header "pcap.h" with dependency libpcap: YES
00:02:39.405 Compiler for C supports arguments -Wcast-qual: YES
00:02:39.405 Compiler for C supports arguments -Wdeprecated: YES
00:02:39.405 Compiler for C supports arguments -Wformat: YES
00:02:39.405 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:39.405 Compiler for C supports arguments -Wformat-security: NO
00:02:39.405 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:39.405 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:39.405 Compiler for C supports arguments -Wnested-externs: YES
00:02:39.405 Compiler for C supports arguments -Wold-style-definition: YES
00:02:39.405 Compiler for C supports arguments -Wpointer-arith: YES
00:02:39.406 Compiler for C supports arguments -Wsign-compare: YES
00:02:39.406 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:39.406 Compiler for C supports arguments -Wundef: YES
00:02:39.406 Compiler for C supports arguments -Wwrite-strings: YES
00:02:39.406 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:39.406 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:39.406 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:39.406 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:39.406 Program objdump found: YES (/usr/bin/objdump)
00:02:39.406 Compiler for C supports arguments -mavx512f: YES
00:02:39.406 Checking if "AVX512 checking" compiles: YES
00:02:39.406 Fetching value of define "__SSE4_2__" : 1
00:02:39.406 Fetching value of define "__AES__" : 1
00:02:39.406 Fetching value of define "__AVX__" : 1
00:02:39.406 Fetching value of define "__AVX2__" : 1
00:02:39.406 Fetching value of define "__AVX512BW__" : 1
00:02:39.406 Fetching value of define "__AVX512CD__" : 1
00:02:39.406 Fetching value of define "__AVX512DQ__" : 1
00:02:39.406 Fetching value of define "__AVX512F__" : 1
00:02:39.406 Fetching value of define "__AVX512VL__" : 1
00:02:39.406 Fetching value of define "__PCLMUL__" : 1
00:02:39.406 Fetching value of define "__RDRND__" : 1
00:02:39.406 Fetching value of define "__RDSEED__" : 1
00:02:39.406 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:39.406 Fetching value of define "__znver1__" : (undefined)
00:02:39.406 Fetching value of define "__znver2__" : (undefined)
00:02:39.406 Fetching value of define "__znver3__" : (undefined)
00:02:39.406 Fetching value of define "__znver4__" : (undefined)
00:02:39.406 Library asan found: YES
00:02:39.406 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:39.406 Message: lib/log: Defining dependency "log"
00:02:39.406 Message: lib/kvargs: Defining dependency "kvargs"
00:02:39.406 Message: lib/telemetry: Defining dependency "telemetry"
00:02:39.406 Library rt found: YES
00:02:39.406 Checking for function "getentropy" : NO
00:02:39.406 Message: lib/eal: Defining dependency "eal"
00:02:39.406 Message: lib/ring: Defining dependency "ring"
00:02:39.406 Message: lib/rcu: Defining dependency "rcu"
00:02:39.406 Message: lib/mempool: Defining dependency "mempool"
00:02:39.406 Message: lib/mbuf: Defining dependency "mbuf"
00:02:39.406 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:39.406 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:39.406 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:39.406 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:39.406 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:39.406 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:39.406 Compiler for C supports arguments -mpclmul: YES
00:02:39.406 Compiler for C supports arguments -maes: YES
00:02:39.406 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:39.406 Compiler for C supports arguments -mavx512bw: YES
00:02:39.406 Compiler for C supports arguments -mavx512dq: YES
00:02:39.406 Compiler for C supports arguments -mavx512vl: YES
00:02:39.406 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:39.406 Compiler for C supports arguments -mavx2: YES
00:02:39.406 Compiler for C supports arguments -mavx: YES
00:02:39.406 Message: lib/net: Defining dependency "net"
00:02:39.406 Message: lib/meter: Defining dependency "meter"
00:02:39.406 Message: lib/ethdev: Defining dependency "ethdev"
00:02:39.406 Message: lib/pci: Defining dependency "pci"
00:02:39.406 Message: lib/cmdline: Defining dependency "cmdline"
00:02:39.406 Message: lib/hash: Defining dependency "hash"
00:02:39.406 Message: lib/timer: Defining dependency "timer"
00:02:39.406 Message: lib/compressdev: Defining dependency "compressdev"
00:02:39.406 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:39.406 Message: lib/dmadev: Defining dependency "dmadev"
00:02:39.406 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:39.406 Message: lib/power: Defining dependency "power"
00:02:39.406 Message: lib/reorder: Defining dependency "reorder"
00:02:39.406 Message: lib/security: Defining dependency "security"
00:02:39.406 Has header "linux/userfaultfd.h" : YES
00:02:39.406 Has header "linux/vduse.h" : YES
00:02:39.406 Message: lib/vhost: Defining dependency "vhost"
00:02:39.406 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:39.406 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:39.406 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:39.406 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:39.406 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:39.406 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:39.406 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:39.406 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:39.406 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:39.406 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:39.406 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:39.406 Configuring doxy-api-html.conf using configuration
00:02:39.406 Configuring doxy-api-man.conf using configuration
00:02:39.406 Program mandb found: YES (/usr/bin/mandb)
00:02:39.406 Program sphinx-build found: NO
00:02:39.406 Configuring rte_build_config.h using configuration
00:02:39.406 Message:
00:02:39.406 =================
00:02:39.406 Applications Enabled
00:02:39.406 =================
00:02:39.406
00:02:39.406 apps:
00:02:39.406
00:02:39.406
00:02:39.406 Message:
00:02:39.406 =================
00:02:39.406 Libraries Enabled
00:02:39.406 =================
00:02:39.406
00:02:39.406 libs:
00:02:39.406 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:39.406 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:39.406 cryptodev, dmadev, power, reorder, security, vhost,
00:02:39.406
00:02:39.406 Message:
00:02:39.406 ===============
00:02:39.406 Drivers Enabled
00:02:39.406 ===============
00:02:39.406
00:02:39.406 common:
00:02:39.406
00:02:39.406 bus:
00:02:39.406 pci, vdev,
00:02:39.406 mempool:
00:02:39.406 ring,
00:02:39.406 dma:
00:02:39.406
00:02:39.406 net:
00:02:39.406
00:02:39.406 crypto:
00:02:39.406
00:02:39.406 compress:
00:02:39.406
00:02:39.406 vdpa:
00:02:39.406
00:02:39.406
00:02:39.406 Message:
00:02:39.406 =================
00:02:39.406 Content Skipped
00:02:39.406 =================
00:02:39.406
00:02:39.406 apps:
00:02:39.406 dumpcap: explicitly disabled via build config
00:02:39.406 graph: explicitly disabled via build config
00:02:39.406 pdump: explicitly disabled via build config
00:02:39.406 proc-info: explicitly disabled via build config
00:02:39.406 test-acl: explicitly disabled via build config
00:02:39.406 test-bbdev: explicitly disabled via build config
00:02:39.406 test-cmdline: explicitly disabled via build config
00:02:39.406 test-compress-perf: explicitly disabled via build config
00:02:39.406 test-crypto-perf: explicitly disabled via build config
00:02:39.406 test-dma-perf: explicitly disabled via build config
00:02:39.406 test-eventdev: explicitly disabled via build config
00:02:39.406 test-fib: explicitly disabled via build config
00:02:39.406 test-flow-perf: explicitly disabled via build config
00:02:39.406 test-gpudev: explicitly disabled via build config
00:02:39.406 test-mldev: explicitly disabled via build config
00:02:39.406 test-pipeline: explicitly disabled via build config
00:02:39.406 test-pmd: explicitly disabled via build config
00:02:39.406 test-regex: explicitly disabled via build config
00:02:39.406 test-sad: explicitly disabled via build config
00:02:39.406 test-security-perf: explicitly disabled via build config
00:02:39.406
00:02:39.406 libs:
00:02:39.406 argparse: explicitly disabled via build config
00:02:39.406 metrics: explicitly disabled via build config
00:02:39.406 acl: explicitly disabled via build config
00:02:39.406 bbdev: explicitly disabled via build config
00:02:39.406 bitratestats: explicitly disabled via build config
00:02:39.406 bpf: explicitly disabled via build config
00:02:39.406 cfgfile: explicitly disabled via build config
00:02:39.406 distributor: explicitly disabled via build config
00:02:39.406 efd: explicitly disabled via build config
00:02:39.406 eventdev: explicitly disabled via build config
00:02:39.406 dispatcher: explicitly disabled via build config
00:02:39.406 gpudev: explicitly disabled via build config
00:02:39.406 gro: explicitly disabled via build config
00:02:39.406 gso: explicitly disabled via build config
00:02:39.406 ip_frag: explicitly disabled via build config
00:02:39.406 jobstats: explicitly disabled via build config
00:02:39.406 latencystats: explicitly disabled via build config
00:02:39.406 lpm: explicitly disabled via build config
00:02:39.406 member: explicitly disabled via build config
00:02:39.406 pcapng: explicitly disabled via build config
00:02:39.406 rawdev: explicitly disabled via build config
00:02:39.406 regexdev: explicitly disabled via build config
00:02:39.406 mldev: explicitly disabled via build config
00:02:39.406 rib: explicitly disabled via build config
00:02:39.406 sched: explicitly disabled via build config
00:02:39.406 stack: explicitly disabled via build config
00:02:39.406 ipsec: explicitly disabled via build config
00:02:39.406 pdcp: explicitly disabled via build config
00:02:39.406 fib: explicitly disabled via build config
00:02:39.406 port: explicitly disabled via build config
00:02:39.406 pdump: explicitly disabled via build config
00:02:39.406 table: explicitly disabled via build config
00:02:39.406 pipeline: explicitly disabled via build config
00:02:39.406 graph: explicitly disabled via build config
00:02:39.406 node: explicitly disabled via build config
00:02:39.406
00:02:39.406 drivers:
00:02:39.406 common/cpt: not in enabled drivers build config
00:02:39.406 common/dpaax: not in enabled drivers build config
00:02:39.406 common/iavf: not in enabled drivers build config
00:02:39.406 common/idpf: not in enabled drivers build config
00:02:39.406 common/ionic: not in enabled drivers build config
00:02:39.406 common/mvep: not in enabled drivers build config
00:02:39.406 common/octeontx: not in enabled drivers build config
00:02:39.406 bus/auxiliary: not in enabled drivers build config
00:02:39.406 bus/cdx: not in enabled drivers build config
00:02:39.406 bus/dpaa: not in enabled drivers build config
00:02:39.407 bus/fslmc: not in enabled drivers build config
00:02:39.407 bus/ifpga: not in enabled drivers build config
00:02:39.407 bus/platform: not in enabled drivers build config
00:02:39.407 bus/uacce: not in enabled drivers build config
00:02:39.407 bus/vmbus: not in enabled drivers build config
00:02:39.407 common/cnxk: not in enabled drivers build config
00:02:39.407 common/mlx5: not in enabled drivers build config
00:02:39.407 common/nfp: not in enabled drivers build config
00:02:39.407 common/nitrox: not in enabled drivers build config
00:02:39.407 common/qat: not in enabled drivers build config
00:02:39.407 common/sfc_efx: not in enabled drivers build config
00:02:39.407 mempool/bucket: not in enabled drivers build config
00:02:39.407 mempool/cnxk: not in enabled drivers build config
00:02:39.407 mempool/dpaa: not in enabled drivers build config
00:02:39.407 mempool/dpaa2: not in enabled drivers build config
00:02:39.407 mempool/octeontx: not in enabled drivers build config
00:02:39.407 mempool/stack: not in enabled drivers build config
00:02:39.407 dma/cnxk: not in enabled drivers build config
00:02:39.407 dma/dpaa: not in enabled drivers build config
00:02:39.407 dma/dpaa2: not in enabled drivers build config
00:02:39.407 dma/hisilicon: not in enabled drivers build config
00:02:39.407 dma/idxd: not in enabled drivers build config
00:02:39.407 dma/ioat: not in enabled drivers build config
00:02:39.407 dma/skeleton: not in enabled drivers build config
00:02:39.407 net/af_packet: not in enabled drivers build config
00:02:39.407 net/af_xdp: not in enabled drivers build config
00:02:39.407 net/ark: not in enabled drivers build config
00:02:39.407 net/atlantic: not in enabled drivers build config
00:02:39.407 net/avp: not in enabled drivers build config
00:02:39.407 net/axgbe: not in enabled drivers build config
00:02:39.407 net/bnx2x: not in enabled drivers build config
00:02:39.407 net/bnxt: not in enabled drivers build config
00:02:39.407 net/bonding: not in enabled drivers build config
00:02:39.407 net/cnxk: not in enabled drivers build config
00:02:39.407 net/cpfl: not in enabled drivers build config
00:02:39.407 net/cxgbe: not in enabled drivers build config
00:02:39.407 net/dpaa: not in enabled drivers build config
00:02:39.407 net/dpaa2: not in enabled drivers build config
00:02:39.407 net/e1000: not in enabled drivers build config
00:02:39.407 net/ena: not in enabled drivers build config
00:02:39.407 net/enetc: not in enabled drivers build config
00:02:39.407 net/enetfec: not in enabled drivers build config
00:02:39.407 net/enic: not in enabled drivers build config
00:02:39.407 net/failsafe: not in enabled drivers build config
00:02:39.407 net/fm10k: not in enabled drivers build config
00:02:39.407 net/gve: not in enabled drivers build config
00:02:39.407 net/hinic: not in enabled drivers build config
00:02:39.407 net/hns3: not in enabled drivers build config
00:02:39.407 net/i40e: not in enabled drivers build config
00:02:39.407 net/iavf: not in enabled drivers build config
00:02:39.407 net/ice: not in enabled drivers build config
00:02:39.407 net/idpf: not in enabled drivers build config
00:02:39.407 net/igc: not in enabled drivers build config
00:02:39.407 net/ionic: not in enabled drivers build config
00:02:39.407 net/ipn3ke: not in enabled drivers build config
00:02:39.407 net/ixgbe: not in enabled drivers build config
00:02:39.407 net/mana: not in enabled drivers build config
00:02:39.407 net/memif: not in enabled drivers build config
00:02:39.407 net/mlx4: not in enabled drivers build config
00:02:39.407 net/mlx5: not in enabled drivers build config
00:02:39.407 net/mvneta: not in enabled drivers build config
00:02:39.407 net/mvpp2: not in enabled drivers build config
00:02:39.407 net/netvsc: not in enabled drivers build config
00:02:39.407 net/nfb: not in enabled drivers build config
00:02:39.407 net/nfp: not in enabled drivers build config
00:02:39.407 net/ngbe: not in enabled drivers build config
00:02:39.407 net/null: not in enabled drivers build config
00:02:39.407 net/octeontx: not in enabled drivers build config
00:02:39.407 net/octeon_ep: not in enabled drivers build config
00:02:39.407 net/pcap: not in enabled drivers build config
00:02:39.407 net/pfe: not in enabled drivers build config
00:02:39.407 net/qede: not in enabled drivers build config
00:02:39.407 net/ring: not in enabled drivers build config
00:02:39.407 net/sfc: not in enabled drivers build config
00:02:39.407 net/softnic: not in enabled drivers build config
00:02:39.407 net/tap: not in enabled drivers build config
00:02:39.407 net/thunderx: not in enabled drivers build config
00:02:39.407 net/txgbe: not in enabled drivers build config
00:02:39.407 net/vdev_netvsc: not in enabled drivers build config
00:02:39.407 net/vhost: not in enabled drivers build config
00:02:39.407 net/virtio: not in enabled drivers build config
00:02:39.407 net/vmxnet3: not in enabled drivers build config
00:02:39.407 raw/*: missing internal dependency, "rawdev"
00:02:39.407 crypto/armv8: not in enabled drivers build config
00:02:39.407 crypto/bcmfs: not in enabled drivers build config
00:02:39.407 crypto/caam_jr: not in enabled drivers build config
00:02:39.407 crypto/ccp: not in enabled drivers build config
00:02:39.407 crypto/cnxk: not in enabled drivers build config
00:02:39.407 crypto/dpaa_sec: not in enabled drivers build config
00:02:39.407 crypto/dpaa2_sec: not in enabled drivers build config
00:02:39.407 crypto/ipsec_mb: not in enabled drivers build config
00:02:39.407 crypto/mlx5: not in enabled drivers build config
00:02:39.407 crypto/mvsam: not in enabled drivers build config
00:02:39.407 crypto/nitrox: not in enabled drivers build config
00:02:39.407 crypto/null: not in enabled drivers build config
00:02:39.407 crypto/octeontx: not in enabled drivers build config
00:02:39.407 crypto/openssl: not in enabled drivers build config
00:02:39.407 crypto/scheduler: not in enabled drivers build config
00:02:39.407 crypto/uadk: not in enabled drivers build config
00:02:39.407 crypto/virtio: not in enabled drivers build config
00:02:39.407 compress/isal: not in enabled drivers build config
00:02:39.407 compress/mlx5: not in enabled drivers build config
00:02:39.407 compress/nitrox: not in enabled drivers build config
00:02:39.407 compress/octeontx: not in enabled drivers build config
00:02:39.407 compress/zlib: not in enabled drivers build config
00:02:39.407 regex/*: missing internal dependency, "regexdev"
00:02:39.407 ml/*: missing internal dependency, "mldev"
00:02:39.407 vdpa/ifc: not in enabled drivers build config
00:02:39.407 vdpa/mlx5: not in enabled drivers build config
00:02:39.407 vdpa/nfp: not in enabled drivers build config
00:02:39.407 vdpa/sfc: not in enabled drivers build config
00:02:39.407 event/*: missing internal dependency, "eventdev"
00:02:39.407 baseband/*: missing internal dependency, "bbdev"
00:02:39.407 gpu/*: missing internal dependency, "gpudev"
00:02:39.407
00:02:39.407
00:02:39.407 Build targets in project: 85
00:02:39.407
00:02:39.407 DPDK 24.03.0
00:02:39.407
00:02:39.407 User defined options
00:02:39.407 buildtype : debug
00:02:39.407 default_library : shared
00:02:39.407 libdir : lib
00:02:39.407 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:39.407 b_sanitize : address
00:02:39.407 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:39.407 c_link_args :
00:02:39.407 cpu_instruction_set: native
00:02:39.407 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:39.407 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:39.407 enable_docs : false
00:02:39.407 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:39.407 enable_kmods : false
00:02:39.407 max_lcores : 128
00:02:39.407 tests : false
00:02:39.407
00:02:39.407 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.407 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:39.407 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:39.407 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:39.407 [3/268] Linking static target lib/librte_kvargs.a
00:02:39.407 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:39.407 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:39.407 [6/268] Linking static target lib/librte_log.a
00:02:39.407 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:39.407 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:39.407 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:39.667 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:39.667 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:39.667 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:39.667 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:39.667 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:39.667 [15/268] Linking static target lib/librte_telemetry.a
00:02:39.667 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:39.667 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:39.667 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:39.926 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.185 [20/268] Linking target lib/librte_log.so.24.1
00:02:40.185 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:40.185 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:40.185 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:40.444 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:40.444 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:40.444 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:40.444 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:40.444 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:40.444 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:40.444 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.444 [31/268] Linking target lib/librte_telemetry.so.24.1
00:02:40.444 [32/268] Linking target lib/librte_kvargs.so.24.1
00:02:40.444 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:40.444 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:40.704 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:40.704 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:40.704 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:40.704 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:40.963 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:40.963 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:40.963 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:40.963 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:40.963 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:40.963 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:41.222 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:41.222 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:41.480 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:41.480 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:41.480 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:41.480 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:41.480 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:41.480 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:41.740 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:41.740 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:41.740 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:41.740 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:41.740 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:41.998 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:41.998 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:41.998 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:41.998 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:42.257 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:42.257 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:42.257 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:42.257 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:42.257 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:42.517 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:42.517 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:42.791 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:42.791 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:42.791 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:42.791 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:42.791 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:42.791 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:42.791 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:42.791 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:43.068 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:43.068 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:43.068 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:43.068 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:43.068 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:43.326 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:43.326 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:43.326 [84/268] Linking static target lib/librte_ring.a
00:02:43.586 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:43.586 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:43.586 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:43.586 [88/268] Linking static target lib/librte_eal.a
00:02:43.845 [89/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:43.845 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:43.845 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:43.845 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:43.845 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:43.845 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:44.104 [95/268] Linking static target lib/librte_mempool.a
00:02:44.104 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:44.104 [97/268] Linking static target lib/librte_rcu.a
00:02:44.105 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:44.371 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:44.371 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:44.371 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:44.371 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:44.371 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:44.371 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:44.630 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.630 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:44.888 [107/268] Linking static target lib/librte_net.a
00:02:44.888 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:44.888 [109/268] Linking static target lib/librte_meter.a
00:02:44.888 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:44.888 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:44.888 [112/268] Linking static target lib/librte_mbuf.a
00:02:45.146 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:45.146 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:45.146 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:45.146 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.404 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.404 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.404 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:45.663 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:45.922 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:46.181 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:46.181 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.181 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:46.181 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:46.181 [126/268] Linking static target lib/librte_pci.a
00:02:46.181 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:46.181 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:46.440 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:46.440 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:46.440 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:46.440 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:46.699 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:46.699 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:46.699 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.699 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:46.699 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:46.699 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:46.699 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:46.699 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:46.699 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:46.699 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:46.957 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:46.957 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:46.957 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:46.957 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:46.957 [147/268] Linking static target lib/librte_cmdline.a
00:02:47.217 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:47.476 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:47.476 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:47.476 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:47.735 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:47.735 [153/268] Linking static target lib/librte_timer.a
00:02:47.735 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:47.994 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:47.994 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:47.994 [157/268] Linking static target lib/librte_ethdev.a
00:02:47.994 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:47.994 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:47.994 [160/268] Linking static target lib/librte_compressdev.a
00:02:47.994 [161/268] Compiling C object
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.994 [162/268] Linking static target lib/librte_hash.a 00:02:48.291 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.291 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.291 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.291 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:48.550 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:48.550 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.550 [169/268] Linking static target lib/librte_dmadev.a 00:02:48.809 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.809 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.809 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.809 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:49.067 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.067 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.327 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.327 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.327 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.327 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.327 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.327 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.587 [182/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.588 [183/268] Linking static target lib/librte_cryptodev.a 00:02:49.588 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.848 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.848 [186/268] Linking static target lib/librte_power.a 00:02:49.848 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.107 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.107 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.107 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.366 [191/268] Linking static target lib/librte_security.a 00:02:50.366 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.366 [193/268] Linking static target lib/librte_reorder.a 00:02:50.934 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.934 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.934 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.934 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.194 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.194 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.454 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.454 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.454 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.713 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.713 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.713 [205/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.973 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.973 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.973 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.973 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.233 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.233 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.233 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.233 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.233 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.492 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.492 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.492 [217/268] Linking static target drivers/librte_bus_pci.a 00:02:52.492 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.492 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:52.492 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.492 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.752 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:52.752 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.752 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.752 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:52.752 [226/268] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.011 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.582 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.532 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.532 [230/268] Linking target lib/librte_eal.so.24.1 00:02:54.790 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:54.790 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:54.790 [233/268] Linking target lib/librte_pci.so.24.1 00:02:54.790 [234/268] Linking target lib/librte_meter.so.24.1 00:02:54.790 [235/268] Linking target lib/librte_ring.so.24.1 00:02:54.790 [236/268] Linking target lib/librte_timer.so.24.1 00:02:54.790 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:54.790 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:54.790 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:55.048 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:55.048 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:55.048 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:55.048 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:55.048 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:55.048 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:55.048 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.048 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:55.307 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:55.307 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:02:55.307 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:55.307 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:02:55.307 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:55.307 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:55.307 [254/268] Linking target lib/librte_net.so.24.1 00:02:55.565 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:55.565 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:55.565 [257/268] Linking target lib/librte_security.so.24.1 00:02:55.565 [258/268] Linking target lib/librte_hash.so.24.1 00:02:55.565 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:55.823 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:56.757 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.757 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:56.758 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:57.016 [264/268] Linking target lib/librte_power.so.24.1 00:02:58.393 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.393 [266/268] Linking static target lib/librte_vhost.a 00:02:59.787 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.787 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:59.787 INFO: autodetecting backend as ninja 00:02:59.787 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:26.322 CC lib/ut_mock/mock.o 00:03:26.322 CC lib/log/log_flags.o 00:03:26.322 CC lib/log/log.o 00:03:26.322 CC lib/ut/ut.o 00:03:26.323 CC lib/log/log_deprecated.o 00:03:26.323 LIB libspdk_log.a 00:03:26.323 LIB libspdk_ut_mock.a 00:03:26.323 LIB libspdk_ut.a 00:03:26.323 SO 
libspdk_log.so.7.1 00:03:26.323 SO libspdk_ut_mock.so.6.0 00:03:26.323 SO libspdk_ut.so.2.0 00:03:26.323 SYMLINK libspdk_ut_mock.so 00:03:26.323 SYMLINK libspdk_log.so 00:03:26.323 SYMLINK libspdk_ut.so 00:03:26.323 CC lib/ioat/ioat.o 00:03:26.323 CC lib/util/base64.o 00:03:26.323 CC lib/util/bit_array.o 00:03:26.323 CC lib/dma/dma.o 00:03:26.323 CC lib/util/cpuset.o 00:03:26.323 CC lib/util/crc32.o 00:03:26.323 CC lib/util/crc16.o 00:03:26.323 CC lib/util/crc32c.o 00:03:26.323 CXX lib/trace_parser/trace.o 00:03:26.323 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.323 CC lib/util/crc32_ieee.o 00:03:26.323 CC lib/util/crc64.o 00:03:26.323 CC lib/vfio_user/host/vfio_user.o 00:03:26.323 CC lib/util/dif.o 00:03:26.323 LIB libspdk_dma.a 00:03:26.323 SO libspdk_dma.so.5.0 00:03:26.323 CC lib/util/fd.o 00:03:26.323 LIB libspdk_ioat.a 00:03:26.323 SO libspdk_ioat.so.7.0 00:03:26.323 SYMLINK libspdk_dma.so 00:03:26.323 CC lib/util/fd_group.o 00:03:26.323 CC lib/util/file.o 00:03:26.323 CC lib/util/hexlify.o 00:03:26.323 CC lib/util/iov.o 00:03:26.323 SYMLINK libspdk_ioat.so 00:03:26.323 CC lib/util/math.o 00:03:26.323 CC lib/util/net.o 00:03:26.323 CC lib/util/pipe.o 00:03:26.323 CC lib/util/strerror_tls.o 00:03:26.323 CC lib/util/string.o 00:03:26.323 CC lib/util/uuid.o 00:03:26.323 LIB libspdk_vfio_user.a 00:03:26.323 CC lib/util/xor.o 00:03:26.323 SO libspdk_vfio_user.so.5.0 00:03:26.323 CC lib/util/zipf.o 00:03:26.323 CC lib/util/md5.o 00:03:26.582 SYMLINK libspdk_vfio_user.so 00:03:26.840 LIB libspdk_util.a 00:03:26.840 SO libspdk_util.so.10.1 00:03:27.099 SYMLINK libspdk_util.so 00:03:27.099 LIB libspdk_trace_parser.a 00:03:27.099 SO libspdk_trace_parser.so.6.0 00:03:27.357 CC lib/json/json_parse.o 00:03:27.357 CC lib/json/json_util.o 00:03:27.357 CC lib/json/json_write.o 00:03:27.357 SYMLINK libspdk_trace_parser.so 00:03:27.357 CC lib/rdma_utils/rdma_utils.o 00:03:27.357 CC lib/vmd/vmd.o 00:03:27.357 CC lib/idxd/idxd.o 00:03:27.357 CC lib/vmd/led.o 00:03:27.357 CC 
lib/rdma_provider/common.o 00:03:27.357 CC lib/env_dpdk/env.o 00:03:27.357 CC lib/conf/conf.o 00:03:27.615 CC lib/env_dpdk/memory.o 00:03:27.615 LIB libspdk_rdma_utils.a 00:03:27.615 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:27.615 CC lib/env_dpdk/pci.o 00:03:27.615 SO libspdk_rdma_utils.so.1.0 00:03:27.875 LIB libspdk_conf.a 00:03:27.875 LIB libspdk_json.a 00:03:27.875 CC lib/env_dpdk/init.o 00:03:27.875 SYMLINK libspdk_rdma_utils.so 00:03:27.875 CC lib/idxd/idxd_user.o 00:03:27.875 SO libspdk_conf.so.6.0 00:03:27.875 SO libspdk_json.so.6.0 00:03:27.875 LIB libspdk_rdma_provider.a 00:03:27.875 SYMLINK libspdk_conf.so 00:03:27.875 CC lib/idxd/idxd_kernel.o 00:03:27.875 SO libspdk_rdma_provider.so.6.0 00:03:27.875 SYMLINK libspdk_json.so 00:03:27.875 CC lib/env_dpdk/threads.o 00:03:27.875 SYMLINK libspdk_rdma_provider.so 00:03:28.135 CC lib/env_dpdk/pci_ioat.o 00:03:28.135 CC lib/env_dpdk/pci_virtio.o 00:03:28.135 CC lib/jsonrpc/jsonrpc_server.o 00:03:28.135 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:28.135 LIB libspdk_idxd.a 00:03:28.135 CC lib/env_dpdk/pci_vmd.o 00:03:28.135 CC lib/env_dpdk/pci_idxd.o 00:03:28.395 SO libspdk_idxd.so.12.1 00:03:28.395 LIB libspdk_vmd.a 00:03:28.395 CC lib/env_dpdk/pci_event.o 00:03:28.395 SO libspdk_vmd.so.6.0 00:03:28.395 SYMLINK libspdk_idxd.so 00:03:28.395 CC lib/env_dpdk/sigbus_handler.o 00:03:28.395 CC lib/env_dpdk/pci_dpdk.o 00:03:28.395 CC lib/jsonrpc/jsonrpc_client.o 00:03:28.395 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:28.395 SYMLINK libspdk_vmd.so 00:03:28.395 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:28.395 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:28.654 LIB libspdk_jsonrpc.a 00:03:28.914 SO libspdk_jsonrpc.so.6.0 00:03:28.914 SYMLINK libspdk_jsonrpc.so 00:03:29.178 CC lib/rpc/rpc.o 00:03:29.437 LIB libspdk_env_dpdk.a 00:03:29.437 LIB libspdk_rpc.a 00:03:29.697 SO libspdk_rpc.so.6.0 00:03:29.697 SO libspdk_env_dpdk.so.15.1 00:03:29.697 SYMLINK libspdk_rpc.so 00:03:29.697 SYMLINK libspdk_env_dpdk.so 00:03:29.956 CC 
lib/trace/trace.o 00:03:29.956 CC lib/trace/trace_rpc.o 00:03:29.956 CC lib/trace/trace_flags.o 00:03:29.956 CC lib/notify/notify.o 00:03:29.956 CC lib/notify/notify_rpc.o 00:03:29.956 CC lib/keyring/keyring_rpc.o 00:03:29.956 CC lib/keyring/keyring.o 00:03:30.214 LIB libspdk_notify.a 00:03:30.214 SO libspdk_notify.so.6.0 00:03:30.214 LIB libspdk_trace.a 00:03:30.214 LIB libspdk_keyring.a 00:03:30.214 SYMLINK libspdk_notify.so 00:03:30.474 SO libspdk_keyring.so.2.0 00:03:30.474 SO libspdk_trace.so.11.0 00:03:30.474 SYMLINK libspdk_trace.so 00:03:30.474 SYMLINK libspdk_keyring.so 00:03:30.734 CC lib/thread/iobuf.o 00:03:30.734 CC lib/thread/thread.o 00:03:30.734 CC lib/sock/sock.o 00:03:30.734 CC lib/sock/sock_rpc.o 00:03:31.303 LIB libspdk_sock.a 00:03:31.303 SO libspdk_sock.so.10.0 00:03:31.562 SYMLINK libspdk_sock.so 00:03:31.822 CC lib/nvme/nvme_ctrlr.o 00:03:31.822 CC lib/nvme/nvme_fabric.o 00:03:31.822 CC lib/nvme/nvme_ns_cmd.o 00:03:31.822 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:31.822 CC lib/nvme/nvme_ns.o 00:03:31.822 CC lib/nvme/nvme_pcie.o 00:03:31.822 CC lib/nvme/nvme_pcie_common.o 00:03:31.822 CC lib/nvme/nvme_qpair.o 00:03:31.822 CC lib/nvme/nvme.o 00:03:32.760 CC lib/nvme/nvme_quirks.o 00:03:32.760 LIB libspdk_thread.a 00:03:32.760 CC lib/nvme/nvme_transport.o 00:03:32.760 SO libspdk_thread.so.11.0 00:03:32.760 CC lib/nvme/nvme_discovery.o 00:03:33.020 SYMLINK libspdk_thread.so 00:03:33.020 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:33.020 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:33.020 CC lib/nvme/nvme_tcp.o 00:03:33.020 CC lib/nvme/nvme_opal.o 00:03:33.279 CC lib/nvme/nvme_io_msg.o 00:03:33.539 CC lib/nvme/nvme_poll_group.o 00:03:33.539 CC lib/nvme/nvme_zns.o 00:03:33.539 CC lib/nvme/nvme_stubs.o 00:03:33.539 CC lib/nvme/nvme_auth.o 00:03:33.797 CC lib/accel/accel.o 00:03:33.797 CC lib/blob/blobstore.o 00:03:33.797 CC lib/blob/request.o 00:03:33.797 CC lib/blob/zeroes.o 00:03:34.057 CC lib/blob/blob_bs_dev.o 00:03:34.057 CC lib/nvme/nvme_cuse.o 00:03:34.325 
CC lib/nvme/nvme_rdma.o 00:03:34.325 CC lib/accel/accel_rpc.o 00:03:34.325 CC lib/init/json_config.o 00:03:34.584 CC lib/accel/accel_sw.o 00:03:34.584 CC lib/virtio/virtio.o 00:03:34.584 CC lib/init/subsystem.o 00:03:34.843 CC lib/init/subsystem_rpc.o 00:03:34.843 CC lib/init/rpc.o 00:03:34.843 CC lib/virtio/virtio_vhost_user.o 00:03:34.843 CC lib/virtio/virtio_vfio_user.o 00:03:35.102 LIB libspdk_init.a 00:03:35.102 CC lib/virtio/virtio_pci.o 00:03:35.102 SO libspdk_init.so.6.0 00:03:35.102 LIB libspdk_accel.a 00:03:35.102 SO libspdk_accel.so.16.0 00:03:35.102 SYMLINK libspdk_init.so 00:03:35.102 CC lib/fsdev/fsdev.o 00:03:35.102 CC lib/fsdev/fsdev_io.o 00:03:35.102 CC lib/fsdev/fsdev_rpc.o 00:03:35.102 SYMLINK libspdk_accel.so 00:03:35.361 CC lib/event/reactor.o 00:03:35.361 CC lib/event/app.o 00:03:35.361 CC lib/event/log_rpc.o 00:03:35.361 CC lib/event/app_rpc.o 00:03:35.361 CC lib/bdev/bdev.o 00:03:35.361 LIB libspdk_virtio.a 00:03:35.361 SO libspdk_virtio.so.7.0 00:03:35.621 CC lib/bdev/bdev_rpc.o 00:03:35.621 SYMLINK libspdk_virtio.so 00:03:35.621 CC lib/bdev/bdev_zone.o 00:03:35.621 CC lib/event/scheduler_static.o 00:03:35.621 CC lib/bdev/part.o 00:03:35.879 CC lib/bdev/scsi_nvme.o 00:03:35.879 LIB libspdk_nvme.a 00:03:35.879 LIB libspdk_event.a 00:03:35.879 LIB libspdk_fsdev.a 00:03:36.138 SO libspdk_event.so.14.0 00:03:36.138 SO libspdk_fsdev.so.2.0 00:03:36.138 SO libspdk_nvme.so.15.0 00:03:36.138 SYMLINK libspdk_event.so 00:03:36.138 SYMLINK libspdk_fsdev.so 00:03:36.398 SYMLINK libspdk_nvme.so 00:03:36.657 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:37.225 LIB libspdk_fuse_dispatcher.a 00:03:37.484 SO libspdk_fuse_dispatcher.so.1.0 00:03:37.484 SYMLINK libspdk_fuse_dispatcher.so 00:03:38.052 LIB libspdk_blob.a 00:03:38.310 SO libspdk_blob.so.11.0 00:03:38.310 SYMLINK libspdk_blob.so 00:03:38.877 CC lib/lvol/lvol.o 00:03:38.877 CC lib/blobfs/blobfs.o 00:03:38.877 CC lib/blobfs/tree.o 00:03:39.135 LIB libspdk_bdev.a 00:03:39.135 SO 
libspdk_bdev.so.17.0 00:03:39.393 SYMLINK libspdk_bdev.so 00:03:39.652 CC lib/nvmf/ctrlr.o 00:03:39.652 CC lib/nvmf/ctrlr_discovery.o 00:03:39.652 CC lib/nvmf/subsystem.o 00:03:39.652 CC lib/nvmf/ctrlr_bdev.o 00:03:39.652 CC lib/scsi/dev.o 00:03:39.652 CC lib/ublk/ublk.o 00:03:39.652 CC lib/nbd/nbd.o 00:03:39.652 CC lib/ftl/ftl_core.o 00:03:39.910 CC lib/scsi/lun.o 00:03:39.910 LIB libspdk_blobfs.a 00:03:39.910 SO libspdk_blobfs.so.10.0 00:03:39.910 CC lib/ftl/ftl_init.o 00:03:40.170 LIB libspdk_lvol.a 00:03:40.170 SYMLINK libspdk_blobfs.so 00:03:40.170 CC lib/scsi/port.o 00:03:40.170 SO libspdk_lvol.so.10.0 00:03:40.170 CC lib/nbd/nbd_rpc.o 00:03:40.170 CC lib/scsi/scsi.o 00:03:40.170 SYMLINK libspdk_lvol.so 00:03:40.170 CC lib/scsi/scsi_bdev.o 00:03:40.170 CC lib/ftl/ftl_layout.o 00:03:40.170 CC lib/ftl/ftl_debug.o 00:03:40.429 LIB libspdk_nbd.a 00:03:40.429 CC lib/ftl/ftl_io.o 00:03:40.429 CC lib/scsi/scsi_pr.o 00:03:40.429 SO libspdk_nbd.so.7.0 00:03:40.429 CC lib/ublk/ublk_rpc.o 00:03:40.429 SYMLINK libspdk_nbd.so 00:03:40.429 CC lib/scsi/scsi_rpc.o 00:03:40.429 CC lib/scsi/task.o 00:03:40.429 LIB libspdk_ublk.a 00:03:40.689 CC lib/ftl/ftl_sb.o 00:03:40.689 CC lib/nvmf/nvmf.o 00:03:40.689 SO libspdk_ublk.so.3.0 00:03:40.689 CC lib/ftl/ftl_l2p.o 00:03:40.689 CC lib/ftl/ftl_l2p_flat.o 00:03:40.689 SYMLINK libspdk_ublk.so 00:03:40.689 CC lib/ftl/ftl_nv_cache.o 00:03:40.689 CC lib/ftl/ftl_band.o 00:03:40.689 CC lib/ftl/ftl_band_ops.o 00:03:40.689 LIB libspdk_scsi.a 00:03:40.689 CC lib/ftl/ftl_writer.o 00:03:40.952 CC lib/ftl/ftl_rq.o 00:03:40.952 SO libspdk_scsi.so.9.0 00:03:40.952 CC lib/ftl/ftl_reloc.o 00:03:40.952 SYMLINK libspdk_scsi.so 00:03:40.952 CC lib/ftl/ftl_l2p_cache.o 00:03:40.952 CC lib/ftl/ftl_p2l.o 00:03:41.216 CC lib/ftl/ftl_p2l_log.o 00:03:41.216 CC lib/nvmf/nvmf_rpc.o 00:03:41.216 CC lib/ftl/mngt/ftl_mngt.o 00:03:41.216 CC lib/nvmf/transport.o 00:03:41.216 CC lib/nvmf/tcp.o 00:03:41.476 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:41.736 CC 
lib/iscsi/conn.o 00:03:41.736 CC lib/nvmf/stubs.o 00:03:41.736 CC lib/vhost/vhost.o 00:03:41.736 CC lib/iscsi/init_grp.o 00:03:41.736 CC lib/iscsi/iscsi.o 00:03:41.736 CC lib/vhost/vhost_rpc.o 00:03:41.995 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:41.995 CC lib/iscsi/param.o 00:03:41.995 CC lib/vhost/vhost_scsi.o 00:03:41.995 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:42.254 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:42.254 CC lib/nvmf/mdns_server.o 00:03:42.254 CC lib/nvmf/rdma.o 00:03:42.254 CC lib/nvmf/auth.o 00:03:42.254 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:42.513 CC lib/vhost/vhost_blk.o 00:03:42.513 CC lib/iscsi/portal_grp.o 00:03:42.513 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:42.772 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:42.772 CC lib/vhost/rte_vhost_user.o 00:03:42.772 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:42.772 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:42.772 CC lib/iscsi/tgt_node.o 00:03:43.031 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.031 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.291 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:43.291 CC lib/ftl/utils/ftl_conf.o 00:03:43.291 CC lib/ftl/utils/ftl_md.o 00:03:43.291 CC lib/ftl/utils/ftl_mempool.o 00:03:43.550 CC lib/iscsi/iscsi_subsystem.o 00:03:43.550 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.550 CC lib/iscsi/iscsi_rpc.o 00:03:43.550 CC lib/ftl/utils/ftl_property.o 00:03:43.550 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:43.550 CC lib/iscsi/task.o 00:03:43.550 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:43.550 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:43.809 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:43.809 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:43.809 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:43.809 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:43.809 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:43.809 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:44.068 LIB libspdk_vhost.a 00:03:44.068 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:44.068 SO libspdk_vhost.so.8.0 00:03:44.068 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:44.068 LIB 
libspdk_iscsi.a 00:03:44.068 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:44.068 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:44.068 CC lib/ftl/base/ftl_base_dev.o 00:03:44.068 CC lib/ftl/base/ftl_base_bdev.o 00:03:44.068 SO libspdk_iscsi.so.8.0 00:03:44.068 SYMLINK libspdk_vhost.so 00:03:44.068 CC lib/ftl/ftl_trace.o 00:03:44.327 SYMLINK libspdk_iscsi.so 00:03:44.327 LIB libspdk_ftl.a 00:03:44.896 SO libspdk_ftl.so.9.0 00:03:44.896 SYMLINK libspdk_ftl.so 00:03:45.156 LIB libspdk_nvmf.a 00:03:45.156 SO libspdk_nvmf.so.20.0 00:03:45.416 SYMLINK libspdk_nvmf.so 00:03:45.985 CC module/env_dpdk/env_dpdk_rpc.o 00:03:45.985 CC module/accel/iaa/accel_iaa.o 00:03:45.985 CC module/sock/posix/posix.o 00:03:45.985 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:45.985 CC module/accel/error/accel_error.o 00:03:45.985 CC module/accel/dsa/accel_dsa.o 00:03:45.985 CC module/blob/bdev/blob_bdev.o 00:03:45.985 CC module/keyring/file/keyring.o 00:03:45.985 CC module/accel/ioat/accel_ioat.o 00:03:45.985 CC module/fsdev/aio/fsdev_aio.o 00:03:45.985 LIB libspdk_env_dpdk_rpc.a 00:03:45.985 SO libspdk_env_dpdk_rpc.so.6.0 00:03:45.985 SYMLINK libspdk_env_dpdk_rpc.so 00:03:45.985 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:46.244 CC module/keyring/file/keyring_rpc.o 00:03:46.244 CC module/accel/error/accel_error_rpc.o 00:03:46.244 CC module/accel/ioat/accel_ioat_rpc.o 00:03:46.244 LIB libspdk_scheduler_dynamic.a 00:03:46.244 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.244 SO libspdk_scheduler_dynamic.so.4.0 00:03:46.244 CC module/fsdev/aio/linux_aio_mgr.o 00:03:46.244 SYMLINK libspdk_scheduler_dynamic.so 00:03:46.244 LIB libspdk_blob_bdev.a 00:03:46.244 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.244 SO libspdk_blob_bdev.so.11.0 00:03:46.244 LIB libspdk_keyring_file.a 00:03:46.244 LIB libspdk_accel_error.a 00:03:46.244 LIB libspdk_accel_ioat.a 00:03:46.244 SO libspdk_accel_error.so.2.0 00:03:46.244 LIB libspdk_accel_iaa.a 00:03:46.244 SO libspdk_keyring_file.so.2.0 00:03:46.244 SO 
libspdk_accel_ioat.so.6.0 00:03:46.244 SYMLINK libspdk_blob_bdev.so 00:03:46.244 SO libspdk_accel_iaa.so.3.0 00:03:46.244 SYMLINK libspdk_accel_error.so 00:03:46.244 SYMLINK libspdk_keyring_file.so 00:03:46.503 SYMLINK libspdk_accel_ioat.so 00:03:46.503 SYMLINK libspdk_accel_iaa.so 00:03:46.503 LIB libspdk_accel_dsa.a 00:03:46.503 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.503 SO libspdk_accel_dsa.so.5.0 00:03:46.503 SYMLINK libspdk_accel_dsa.so 00:03:46.503 CC module/keyring/linux/keyring.o 00:03:46.503 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.504 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.504 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:46.504 CC module/blobfs/bdev/blobfs_bdev.o 00:03:46.504 CC module/bdev/gpt/gpt.o 00:03:46.764 CC module/bdev/error/vbdev_error.o 00:03:46.764 CC module/bdev/delay/vbdev_delay.o 00:03:46.764 CC module/bdev/lvol/vbdev_lvol.o 00:03:46.764 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.764 CC module/keyring/linux/keyring_rpc.o 00:03:46.764 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.764 LIB libspdk_scheduler_gscheduler.a 00:03:46.764 SO libspdk_scheduler_gscheduler.so.4.0 00:03:46.764 LIB libspdk_fsdev_aio.a 00:03:46.764 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.764 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:46.764 SO libspdk_fsdev_aio.so.1.0 00:03:46.764 LIB libspdk_sock_posix.a 00:03:46.764 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.764 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:46.764 CC module/bdev/error/vbdev_error_rpc.o 00:03:47.023 LIB libspdk_keyring_linux.a 00:03:47.023 SO libspdk_sock_posix.so.6.0 00:03:47.023 SYMLINK libspdk_fsdev_aio.so 00:03:47.023 SO libspdk_keyring_linux.so.1.0 00:03:47.023 SYMLINK libspdk_sock_posix.so 00:03:47.023 SYMLINK libspdk_keyring_linux.so 00:03:47.023 LIB libspdk_blobfs_bdev.a 00:03:47.023 SO libspdk_blobfs_bdev.so.6.0 00:03:47.023 LIB libspdk_bdev_error.a 00:03:47.023 LIB libspdk_bdev_delay.a 00:03:47.023 CC module/bdev/malloc/bdev_malloc.o 
00:03:47.023 CC module/bdev/null/bdev_null.o 00:03:47.023 SO libspdk_bdev_error.so.6.0 00:03:47.023 SO libspdk_bdev_delay.so.6.0 00:03:47.023 SYMLINK libspdk_blobfs_bdev.so 00:03:47.023 CC module/bdev/passthru/vbdev_passthru.o 00:03:47.023 LIB libspdk_bdev_gpt.a 00:03:47.023 CC module/bdev/nvme/bdev_nvme.o 00:03:47.283 CC module/bdev/null/bdev_null_rpc.o 00:03:47.283 SO libspdk_bdev_gpt.so.6.0 00:03:47.283 SYMLINK libspdk_bdev_delay.so 00:03:47.283 SYMLINK libspdk_bdev_error.so 00:03:47.283 SYMLINK libspdk_bdev_gpt.so 00:03:47.283 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:47.283 LIB libspdk_bdev_lvol.a 00:03:47.283 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:47.283 CC module/bdev/raid/bdev_raid.o 00:03:47.283 SO libspdk_bdev_lvol.so.6.0 00:03:47.283 CC module/bdev/split/vbdev_split.o 00:03:47.283 LIB libspdk_bdev_null.a 00:03:47.543 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:47.543 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:47.543 SO libspdk_bdev_null.so.6.0 00:03:47.543 SYMLINK libspdk_bdev_lvol.so 00:03:47.543 CC module/bdev/raid/bdev_raid_rpc.o 00:03:47.543 CC module/bdev/raid/bdev_raid_sb.o 00:03:47.543 SYMLINK libspdk_bdev_null.so 00:03:47.543 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:47.543 LIB libspdk_bdev_passthru.a 00:03:47.543 SO libspdk_bdev_passthru.so.6.0 00:03:47.543 LIB libspdk_bdev_malloc.a 00:03:47.543 SO libspdk_bdev_malloc.so.6.0 00:03:47.543 SYMLINK libspdk_bdev_passthru.so 00:03:47.543 CC module/bdev/raid/raid0.o 00:03:47.543 CC module/bdev/raid/raid1.o 00:03:47.543 CC module/bdev/split/vbdev_split_rpc.o 00:03:47.543 SYMLINK libspdk_bdev_malloc.so 00:03:47.543 CC module/bdev/nvme/nvme_rpc.o 00:03:47.803 CC module/bdev/raid/concat.o 00:03:47.803 LIB libspdk_bdev_zone_block.a 00:03:47.803 CC module/bdev/raid/raid5f.o 00:03:47.803 SO libspdk_bdev_zone_block.so.6.0 00:03:47.803 LIB libspdk_bdev_split.a 00:03:47.803 SO libspdk_bdev_split.so.6.0 00:03:47.803 SYMLINK libspdk_bdev_zone_block.so 00:03:47.803 SYMLINK 
libspdk_bdev_split.so 00:03:47.803 CC module/bdev/nvme/bdev_mdns_client.o 00:03:47.803 CC module/bdev/nvme/vbdev_opal.o 00:03:48.063 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:48.063 CC module/bdev/aio/bdev_aio.o 00:03:48.063 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:48.063 CC module/bdev/ftl/bdev_ftl.o 00:03:48.063 CC module/bdev/iscsi/bdev_iscsi.o 00:03:48.063 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:48.326 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:48.326 CC module/bdev/aio/bdev_aio_rpc.o 00:03:48.326 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:48.326 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:48.326 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:48.326 LIB libspdk_bdev_ftl.a 00:03:48.326 LIB libspdk_bdev_aio.a 00:03:48.326 SO libspdk_bdev_ftl.so.6.0 00:03:48.598 SO libspdk_bdev_aio.so.6.0 00:03:48.598 SYMLINK libspdk_bdev_aio.so 00:03:48.598 SYMLINK libspdk_bdev_ftl.so 00:03:48.598 LIB libspdk_bdev_iscsi.a 00:03:48.598 LIB libspdk_bdev_raid.a 00:03:48.598 SO libspdk_bdev_iscsi.so.6.0 00:03:48.598 SO libspdk_bdev_raid.so.6.0 00:03:48.598 SYMLINK libspdk_bdev_iscsi.so 00:03:48.866 SYMLINK libspdk_bdev_raid.so 00:03:48.866 LIB libspdk_bdev_virtio.a 00:03:49.125 SO libspdk_bdev_virtio.so.6.0 00:03:49.125 SYMLINK libspdk_bdev_virtio.so 00:03:50.063 LIB libspdk_bdev_nvme.a 00:03:50.323 SO libspdk_bdev_nvme.so.7.1 00:03:50.323 SYMLINK libspdk_bdev_nvme.so 00:03:50.892 CC module/event/subsystems/vmd/vmd.o 00:03:50.892 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.892 CC module/event/subsystems/fsdev/fsdev.o 00:03:50.892 CC module/event/subsystems/sock/sock.o 00:03:50.892 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.892 CC module/event/subsystems/keyring/keyring.o 00:03:50.892 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.892 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.892 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.892 LIB libspdk_event_vhost_blk.a 00:03:51.152 LIB libspdk_event_fsdev.a 00:03:51.152 LIB 
libspdk_event_sock.a 00:03:51.152 LIB libspdk_event_vmd.a 00:03:51.152 LIB libspdk_event_keyring.a 00:03:51.152 LIB libspdk_event_scheduler.a 00:03:51.152 SO libspdk_event_vhost_blk.so.3.0 00:03:51.152 SO libspdk_event_fsdev.so.1.0 00:03:51.152 SO libspdk_event_sock.so.5.0 00:03:51.152 SO libspdk_event_vmd.so.6.0 00:03:51.152 SO libspdk_event_keyring.so.1.0 00:03:51.152 LIB libspdk_event_iobuf.a 00:03:51.152 SO libspdk_event_scheduler.so.4.0 00:03:51.152 SO libspdk_event_iobuf.so.3.0 00:03:51.152 SYMLINK libspdk_event_vhost_blk.so 00:03:51.152 SYMLINK libspdk_event_fsdev.so 00:03:51.152 SYMLINK libspdk_event_sock.so 00:03:51.152 SYMLINK libspdk_event_keyring.so 00:03:51.152 SYMLINK libspdk_event_vmd.so 00:03:51.152 SYMLINK libspdk_event_scheduler.so 00:03:51.152 SYMLINK libspdk_event_iobuf.so 00:03:51.412 CC module/event/subsystems/accel/accel.o 00:03:51.672 LIB libspdk_event_accel.a 00:03:51.672 SO libspdk_event_accel.so.6.0 00:03:51.931 SYMLINK libspdk_event_accel.so 00:03:52.191 CC module/event/subsystems/bdev/bdev.o 00:03:52.451 LIB libspdk_event_bdev.a 00:03:52.451 SO libspdk_event_bdev.so.6.0 00:03:52.451 SYMLINK libspdk_event_bdev.so 00:03:53.018 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:53.018 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:53.018 CC module/event/subsystems/ublk/ublk.o 00:03:53.018 CC module/event/subsystems/nbd/nbd.o 00:03:53.018 CC module/event/subsystems/scsi/scsi.o 00:03:53.018 LIB libspdk_event_ublk.a 00:03:53.018 LIB libspdk_event_nbd.a 00:03:53.018 LIB libspdk_event_scsi.a 00:03:53.018 SO libspdk_event_ublk.so.3.0 00:03:53.018 SO libspdk_event_nbd.so.6.0 00:03:53.018 SO libspdk_event_scsi.so.6.0 00:03:53.018 LIB libspdk_event_nvmf.a 00:03:53.018 SYMLINK libspdk_event_ublk.so 00:03:53.018 SYMLINK libspdk_event_nbd.so 00:03:53.018 SO libspdk_event_nvmf.so.6.0 00:03:53.277 SYMLINK libspdk_event_scsi.so 00:03:53.277 SYMLINK libspdk_event_nvmf.so 00:03:53.536 CC module/event/subsystems/iscsi/iscsi.o 00:03:53.536 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:53.796 LIB libspdk_event_iscsi.a 00:03:53.796 LIB libspdk_event_vhost_scsi.a 00:03:53.796 SO libspdk_event_iscsi.so.6.0 00:03:53.796 SO libspdk_event_vhost_scsi.so.3.0 00:03:53.796 SYMLINK libspdk_event_iscsi.so 00:03:53.796 SYMLINK libspdk_event_vhost_scsi.so 00:03:54.056 SO libspdk.so.6.0 00:03:54.056 SYMLINK libspdk.so 00:03:54.317 CC test/rpc_client/rpc_client_test.o 00:03:54.317 TEST_HEADER include/spdk/accel.h 00:03:54.317 TEST_HEADER include/spdk/accel_module.h 00:03:54.317 TEST_HEADER include/spdk/assert.h 00:03:54.317 TEST_HEADER include/spdk/barrier.h 00:03:54.317 TEST_HEADER include/spdk/base64.h 00:03:54.317 CXX app/trace/trace.o 00:03:54.317 TEST_HEADER include/spdk/bdev.h 00:03:54.317 TEST_HEADER include/spdk/bdev_module.h 00:03:54.317 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.317 TEST_HEADER include/spdk/bit_array.h 00:03:54.317 TEST_HEADER include/spdk/bit_pool.h 00:03:54.317 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.317 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.317 CC app/trace_record/trace_record.o 00:03:54.317 TEST_HEADER include/spdk/blobfs.h 00:03:54.317 TEST_HEADER include/spdk/blob.h 00:03:54.317 TEST_HEADER include/spdk/conf.h 00:03:54.317 TEST_HEADER include/spdk/config.h 00:03:54.317 TEST_HEADER include/spdk/cpuset.h 00:03:54.317 TEST_HEADER include/spdk/crc16.h 00:03:54.317 TEST_HEADER include/spdk/crc32.h 00:03:54.317 TEST_HEADER include/spdk/crc64.h 00:03:54.317 TEST_HEADER include/spdk/dif.h 00:03:54.317 TEST_HEADER include/spdk/dma.h 00:03:54.317 TEST_HEADER include/spdk/endian.h 00:03:54.317 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.317 TEST_HEADER include/spdk/env.h 00:03:54.317 TEST_HEADER include/spdk/event.h 00:03:54.317 TEST_HEADER include/spdk/fd_group.h 00:03:54.317 TEST_HEADER include/spdk/fd.h 00:03:54.317 TEST_HEADER include/spdk/file.h 00:03:54.317 TEST_HEADER include/spdk/fsdev.h 00:03:54.317 TEST_HEADER include/spdk/fsdev_module.h 00:03:54.317 
TEST_HEADER include/spdk/ftl.h 00:03:54.317 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:54.317 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.317 TEST_HEADER include/spdk/hexlify.h 00:03:54.317 TEST_HEADER include/spdk/histogram_data.h 00:03:54.317 CC examples/util/zipf/zipf.o 00:03:54.317 TEST_HEADER include/spdk/idxd.h 00:03:54.317 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.317 TEST_HEADER include/spdk/init.h 00:03:54.317 TEST_HEADER include/spdk/ioat.h 00:03:54.317 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.583 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.583 TEST_HEADER include/spdk/json.h 00:03:54.583 CC examples/ioat/perf/perf.o 00:03:54.583 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.583 TEST_HEADER include/spdk/keyring.h 00:03:54.583 TEST_HEADER include/spdk/keyring_module.h 00:03:54.583 TEST_HEADER include/spdk/likely.h 00:03:54.583 TEST_HEADER include/spdk/log.h 00:03:54.583 TEST_HEADER include/spdk/lvol.h 00:03:54.584 CC test/thread/poller_perf/poller_perf.o 00:03:54.584 TEST_HEADER include/spdk/md5.h 00:03:54.584 TEST_HEADER include/spdk/memory.h 00:03:54.584 TEST_HEADER include/spdk/mmio.h 00:03:54.584 TEST_HEADER include/spdk/nbd.h 00:03:54.584 TEST_HEADER include/spdk/net.h 00:03:54.584 TEST_HEADER include/spdk/notify.h 00:03:54.584 TEST_HEADER include/spdk/nvme.h 00:03:54.584 CC test/app/bdev_svc/bdev_svc.o 00:03:54.584 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.584 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.584 CC test/dma/test_dma/test_dma.o 00:03:54.584 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.584 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.584 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.584 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.584 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.584 TEST_HEADER include/spdk/nvmf.h 00:03:54.584 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.584 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.584 TEST_HEADER include/spdk/opal.h 00:03:54.584 TEST_HEADER 
include/spdk/opal_spec.h 00:03:54.584 TEST_HEADER include/spdk/pci_ids.h 00:03:54.584 TEST_HEADER include/spdk/pipe.h 00:03:54.584 TEST_HEADER include/spdk/queue.h 00:03:54.584 TEST_HEADER include/spdk/reduce.h 00:03:54.584 CC test/env/mem_callbacks/mem_callbacks.o 00:03:54.584 TEST_HEADER include/spdk/rpc.h 00:03:54.584 TEST_HEADER include/spdk/scheduler.h 00:03:54.584 TEST_HEADER include/spdk/scsi.h 00:03:54.584 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.584 TEST_HEADER include/spdk/sock.h 00:03:54.584 TEST_HEADER include/spdk/stdinc.h 00:03:54.584 TEST_HEADER include/spdk/string.h 00:03:54.584 TEST_HEADER include/spdk/thread.h 00:03:54.584 TEST_HEADER include/spdk/trace.h 00:03:54.584 LINK rpc_client_test 00:03:54.584 TEST_HEADER include/spdk/trace_parser.h 00:03:54.584 TEST_HEADER include/spdk/tree.h 00:03:54.584 TEST_HEADER include/spdk/ublk.h 00:03:54.584 TEST_HEADER include/spdk/util.h 00:03:54.584 TEST_HEADER include/spdk/uuid.h 00:03:54.584 TEST_HEADER include/spdk/version.h 00:03:54.584 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:54.584 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.584 TEST_HEADER include/spdk/vhost.h 00:03:54.584 TEST_HEADER include/spdk/vmd.h 00:03:54.584 TEST_HEADER include/spdk/xor.h 00:03:54.584 TEST_HEADER include/spdk/zipf.h 00:03:54.584 CXX test/cpp_headers/accel.o 00:03:54.584 LINK zipf 00:03:54.584 LINK bdev_svc 00:03:54.584 LINK spdk_trace_record 00:03:54.843 LINK ioat_perf 00:03:54.843 LINK poller_perf 00:03:54.843 LINK spdk_trace 00:03:54.843 CC test/env/vtophys/vtophys.o 00:03:54.843 CXX test/cpp_headers/accel_module.o 00:03:54.843 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:54.843 CC examples/ioat/verify/verify.o 00:03:55.101 LINK vtophys 00:03:55.101 CC app/nvmf_tgt/nvmf_main.o 00:03:55.101 LINK env_dpdk_post_init 00:03:55.101 CC test/env/memory/memory_ut.o 00:03:55.101 LINK test_dma 00:03:55.101 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:55.101 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:03:55.101 CXX test/cpp_headers/assert.o 00:03:55.101 LINK mem_callbacks 00:03:55.359 LINK nvmf_tgt 00:03:55.359 LINK verify 00:03:55.359 CXX test/cpp_headers/barrier.o 00:03:55.359 CXX test/cpp_headers/base64.o 00:03:55.359 CC app/iscsi_tgt/iscsi_tgt.o 00:03:55.359 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:55.618 CXX test/cpp_headers/bdev.o 00:03:55.618 LINK iscsi_tgt 00:03:55.618 CC examples/thread/thread/thread_ex.o 00:03:55.618 CC test/event/event_perf/event_perf.o 00:03:55.618 LINK nvme_fuzz 00:03:55.618 CC examples/sock/hello_world/hello_sock.o 00:03:55.618 CC examples/vmd/lsvmd/lsvmd.o 00:03:55.618 LINK interrupt_tgt 00:03:55.877 LINK event_perf 00:03:55.877 CXX test/cpp_headers/bdev_module.o 00:03:55.877 CXX test/cpp_headers/bdev_zone.o 00:03:55.877 LINK lsvmd 00:03:55.877 LINK thread 00:03:55.877 CC app/spdk_tgt/spdk_tgt.o 00:03:56.135 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:56.135 LINK hello_sock 00:03:56.135 CC test/event/reactor/reactor.o 00:03:56.135 CXX test/cpp_headers/bit_array.o 00:03:56.135 CC test/event/reactor_perf/reactor_perf.o 00:03:56.135 CC examples/vmd/led/led.o 00:03:56.135 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:56.432 LINK reactor_perf 00:03:56.432 CXX test/cpp_headers/bit_pool.o 00:03:56.432 CC test/event/app_repeat/app_repeat.o 00:03:56.432 LINK reactor 00:03:56.432 LINK spdk_tgt 00:03:56.432 LINK led 00:03:56.432 CC test/event/scheduler/scheduler.o 00:03:56.691 CXX test/cpp_headers/blob_bdev.o 00:03:56.691 LINK app_repeat 00:03:56.691 CXX test/cpp_headers/blobfs_bdev.o 00:03:56.691 CC examples/idxd/perf/perf.o 00:03:56.691 LINK scheduler 00:03:56.691 LINK memory_ut 00:03:56.952 CC app/spdk_lspci/spdk_lspci.o 00:03:56.952 LINK vhost_fuzz 00:03:56.952 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:56.952 CXX test/cpp_headers/blobfs.o 00:03:56.952 LINK spdk_lspci 00:03:57.210 LINK idxd_perf 00:03:57.210 CC examples/accel/perf/accel_perf.o 00:03:57.210 CC app/spdk_nvme_perf/perf.o 00:03:57.210 CC 
examples/blob/hello_world/hello_blob.o 00:03:57.210 CXX test/cpp_headers/blob.o 00:03:57.210 CC examples/nvme/hello_world/hello_world.o 00:03:57.210 CC test/env/pci/pci_ut.o 00:03:57.210 LINK iscsi_fuzz 00:03:57.468 CC examples/nvme/reconnect/reconnect.o 00:03:57.468 CXX test/cpp_headers/conf.o 00:03:57.468 LINK hello_fsdev 00:03:57.468 CC examples/blob/cli/blobcli.o 00:03:57.468 LINK hello_blob 00:03:57.468 LINK hello_world 00:03:57.468 CXX test/cpp_headers/config.o 00:03:57.468 CXX test/cpp_headers/cpuset.o 00:03:57.726 CC test/app/histogram_perf/histogram_perf.o 00:03:57.726 CXX test/cpp_headers/crc16.o 00:03:57.726 LINK pci_ut 00:03:57.726 LINK accel_perf 00:03:57.726 LINK histogram_perf 00:03:57.726 CC app/spdk_nvme_identify/identify.o 00:03:57.726 LINK reconnect 00:03:57.985 CXX test/cpp_headers/crc32.o 00:03:57.985 CC test/nvme/aer/aer.o 00:03:57.985 CC test/accel/dif/dif.o 00:03:57.985 LINK blobcli 00:03:57.985 CXX test/cpp_headers/crc64.o 00:03:57.985 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:57.985 CC test/app/jsoncat/jsoncat.o 00:03:58.245 CC test/app/stub/stub.o 00:03:58.245 CC test/blobfs/mkfs/mkfs.o 00:03:58.245 CXX test/cpp_headers/dif.o 00:03:58.245 LINK jsoncat 00:03:58.245 LINK aer 00:03:58.245 LINK spdk_nvme_perf 00:03:58.245 LINK stub 00:03:58.505 CXX test/cpp_headers/dma.o 00:03:58.505 LINK mkfs 00:03:58.505 CXX test/cpp_headers/endian.o 00:03:58.505 CC examples/bdev/hello_world/hello_bdev.o 00:03:58.505 CXX test/cpp_headers/env_dpdk.o 00:03:58.505 CXX test/cpp_headers/env.o 00:03:58.763 CC test/nvme/reset/reset.o 00:03:58.763 LINK nvme_manage 00:03:58.763 CC app/spdk_nvme_discover/discovery_aer.o 00:03:58.763 CC app/spdk_top/spdk_top.o 00:03:58.763 CXX test/cpp_headers/event.o 00:03:58.763 LINK hello_bdev 00:03:58.763 LINK dif 00:03:59.021 CC app/vhost/vhost.o 00:03:59.021 LINK spdk_nvme_identify 00:03:59.021 CC app/spdk_dd/spdk_dd.o 00:03:59.021 LINK spdk_nvme_discover 00:03:59.021 CC examples/nvme/arbitration/arbitration.o 
00:03:59.021 CXX test/cpp_headers/fd_group.o 00:03:59.021 LINK reset 00:03:59.021 CXX test/cpp_headers/fd.o 00:03:59.279 CXX test/cpp_headers/file.o 00:03:59.279 CXX test/cpp_headers/fsdev.o 00:03:59.279 LINK vhost 00:03:59.279 CC examples/bdev/bdevperf/bdevperf.o 00:03:59.279 CXX test/cpp_headers/fsdev_module.o 00:03:59.279 LINK spdk_dd 00:03:59.537 LINK arbitration 00:03:59.538 CC test/nvme/sgl/sgl.o 00:03:59.538 CC test/nvme/e2edp/nvme_dp.o 00:03:59.538 CC test/lvol/esnap/esnap.o 00:03:59.538 CC test/bdev/bdevio/bdevio.o 00:03:59.538 CXX test/cpp_headers/ftl.o 00:03:59.538 CC app/fio/nvme/fio_plugin.o 00:03:59.797 CC app/fio/bdev/fio_plugin.o 00:03:59.797 CC examples/nvme/hotplug/hotplug.o 00:03:59.797 LINK sgl 00:03:59.797 LINK nvme_dp 00:03:59.797 LINK spdk_top 00:03:59.797 CXX test/cpp_headers/fuse_dispatcher.o 00:04:00.056 LINK hotplug 00:04:00.056 CXX test/cpp_headers/gpt_spec.o 00:04:00.056 CC test/nvme/overhead/overhead.o 00:04:00.056 CC test/nvme/err_injection/err_injection.o 00:04:00.315 CXX test/cpp_headers/hexlify.o 00:04:00.315 LINK bdevio 00:04:00.315 CC test/nvme/startup/startup.o 00:04:00.315 LINK bdevperf 00:04:00.315 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:00.315 LINK spdk_nvme 00:04:00.315 LINK spdk_bdev 00:04:00.315 CXX test/cpp_headers/histogram_data.o 00:04:00.315 LINK err_injection 00:04:00.573 LINK startup 00:04:00.573 LINK overhead 00:04:00.573 CXX test/cpp_headers/idxd.o 00:04:00.573 LINK cmb_copy 00:04:00.573 CC test/nvme/reserve/reserve.o 00:04:00.573 CC examples/nvme/abort/abort.o 00:04:00.573 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:00.832 CC test/nvme/connect_stress/connect_stress.o 00:04:00.832 CC test/nvme/simple_copy/simple_copy.o 00:04:00.832 CXX test/cpp_headers/idxd_spec.o 00:04:00.832 CC test/nvme/boot_partition/boot_partition.o 00:04:00.832 CXX test/cpp_headers/init.o 00:04:00.832 LINK pmr_persistence 00:04:00.832 LINK reserve 00:04:00.832 CC test/nvme/compliance/nvme_compliance.o 00:04:01.090 LINK 
connect_stress 00:04:01.090 LINK boot_partition 00:04:01.090 LINK simple_copy 00:04:01.090 CC test/nvme/fused_ordering/fused_ordering.o 00:04:01.090 CXX test/cpp_headers/ioat.o 00:04:01.090 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:01.090 CXX test/cpp_headers/ioat_spec.o 00:04:01.348 CXX test/cpp_headers/iscsi_spec.o 00:04:01.348 CC test/nvme/fdp/fdp.o 00:04:01.348 CC test/nvme/cuse/cuse.o 00:04:01.348 LINK abort 00:04:01.348 LINK nvme_compliance 00:04:01.348 CXX test/cpp_headers/json.o 00:04:01.348 CXX test/cpp_headers/jsonrpc.o 00:04:01.348 CXX test/cpp_headers/keyring.o 00:04:01.348 LINK fused_ordering 00:04:01.607 CXX test/cpp_headers/keyring_module.o 00:04:01.607 LINK doorbell_aers 00:04:01.607 CXX test/cpp_headers/likely.o 00:04:01.607 CXX test/cpp_headers/log.o 00:04:01.607 CXX test/cpp_headers/lvol.o 00:04:01.607 CXX test/cpp_headers/md5.o 00:04:01.607 LINK fdp 00:04:01.865 CXX test/cpp_headers/memory.o 00:04:01.865 CXX test/cpp_headers/mmio.o 00:04:01.865 CXX test/cpp_headers/nbd.o 00:04:01.865 CXX test/cpp_headers/net.o 00:04:01.865 CC examples/nvmf/nvmf/nvmf.o 00:04:01.865 CXX test/cpp_headers/notify.o 00:04:01.865 CXX test/cpp_headers/nvme.o 00:04:01.865 CXX test/cpp_headers/nvme_intel.o 00:04:01.865 CXX test/cpp_headers/nvme_ocssd.o 00:04:02.123 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:02.123 CXX test/cpp_headers/nvme_spec.o 00:04:02.123 CXX test/cpp_headers/nvme_zns.o 00:04:02.123 CXX test/cpp_headers/nvmf_cmd.o 00:04:02.123 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:02.123 CXX test/cpp_headers/nvmf.o 00:04:02.123 CXX test/cpp_headers/nvmf_spec.o 00:04:02.381 CXX test/cpp_headers/nvmf_transport.o 00:04:02.381 CXX test/cpp_headers/opal.o 00:04:02.381 CXX test/cpp_headers/opal_spec.o 00:04:02.381 CXX test/cpp_headers/pci_ids.o 00:04:02.381 CXX test/cpp_headers/pipe.o 00:04:02.381 CXX test/cpp_headers/queue.o 00:04:02.381 LINK nvmf 00:04:02.381 CXX test/cpp_headers/reduce.o 00:04:02.381 CXX test/cpp_headers/rpc.o 00:04:02.640 CXX 
test/cpp_headers/scheduler.o 00:04:02.640 CXX test/cpp_headers/scsi.o 00:04:02.640 CXX test/cpp_headers/scsi_spec.o 00:04:02.640 CXX test/cpp_headers/sock.o 00:04:02.640 CXX test/cpp_headers/stdinc.o 00:04:02.640 CXX test/cpp_headers/string.o 00:04:02.640 CXX test/cpp_headers/thread.o 00:04:02.640 CXX test/cpp_headers/trace.o 00:04:02.640 CXX test/cpp_headers/trace_parser.o 00:04:02.640 CXX test/cpp_headers/tree.o 00:04:02.900 CXX test/cpp_headers/ublk.o 00:04:02.900 CXX test/cpp_headers/util.o 00:04:02.900 CXX test/cpp_headers/uuid.o 00:04:02.900 CXX test/cpp_headers/version.o 00:04:02.900 CXX test/cpp_headers/vfio_user_pci.o 00:04:02.900 CXX test/cpp_headers/vfio_user_spec.o 00:04:02.900 CXX test/cpp_headers/vhost.o 00:04:02.900 CXX test/cpp_headers/vmd.o 00:04:02.900 CXX test/cpp_headers/xor.o 00:04:02.900 CXX test/cpp_headers/zipf.o 00:04:03.158 LINK cuse 00:04:06.443 LINK esnap 00:04:07.010 00:04:07.010 real 1m39.798s 00:04:07.010 user 8m38.845s 00:04:07.010 sys 1m41.437s 00:04:07.010 11:36:32 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:07.010 11:36:32 make -- common/autotest_common.sh@10 -- $ set +x 00:04:07.010 ************************************ 00:04:07.010 END TEST make 00:04:07.010 ************************************ 00:04:07.010 11:36:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:07.010 11:36:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:07.010 11:36:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:07.010 11:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.010 11:36:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:07.010 11:36:32 -- pm/common@44 -- $ pid=5471 00:04:07.010 11:36:32 -- pm/common@50 -- $ kill -TERM 5471 00:04:07.010 11:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.010 11:36:32 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:07.010 11:36:32 -- pm/common@44 -- $ pid=5473 00:04:07.010 11:36:32 -- pm/common@50 -- $ kill -TERM 5473 00:04:07.010 11:36:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:07.010 11:36:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:07.269 11:36:32 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.269 11:36:32 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.269 11:36:32 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.269 11:36:32 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.269 11:36:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.269 11:36:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.269 11:36:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.269 11:36:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.269 11:36:32 -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.269 11:36:32 -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.269 11:36:32 -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.269 11:36:32 -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.269 11:36:32 -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.269 11:36:32 -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.269 11:36:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.269 11:36:32 -- scripts/common.sh@344 -- # case "$op" in 00:04:07.269 11:36:32 -- scripts/common.sh@345 -- # : 1 00:04:07.269 11:36:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.269 11:36:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.269 11:36:32 -- scripts/common.sh@365 -- # decimal 1 00:04:07.269 11:36:32 -- scripts/common.sh@353 -- # local d=1 00:04:07.269 11:36:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.269 11:36:32 -- scripts/common.sh@355 -- # echo 1 00:04:07.269 11:36:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.269 11:36:32 -- scripts/common.sh@366 -- # decimal 2 00:04:07.269 11:36:32 -- scripts/common.sh@353 -- # local d=2 00:04:07.269 11:36:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.269 11:36:32 -- scripts/common.sh@355 -- # echo 2 00:04:07.269 11:36:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.269 11:36:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.269 11:36:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.269 11:36:32 -- scripts/common.sh@368 -- # return 0 00:04:07.269 11:36:32 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.269 11:36:32 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.269 --rc genhtml_branch_coverage=1 00:04:07.269 --rc genhtml_function_coverage=1 00:04:07.269 --rc genhtml_legend=1 00:04:07.269 --rc geninfo_all_blocks=1 00:04:07.269 --rc geninfo_unexecuted_blocks=1 00:04:07.269 00:04:07.269 ' 00:04:07.269 11:36:32 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.269 --rc genhtml_branch_coverage=1 00:04:07.269 --rc genhtml_function_coverage=1 00:04:07.269 --rc genhtml_legend=1 00:04:07.269 --rc geninfo_all_blocks=1 00:04:07.269 --rc geninfo_unexecuted_blocks=1 00:04:07.269 00:04:07.269 ' 00:04:07.269 11:36:32 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.269 --rc genhtml_branch_coverage=1 00:04:07.269 --rc 
genhtml_function_coverage=1 00:04:07.269 --rc genhtml_legend=1 00:04:07.269 --rc geninfo_all_blocks=1 00:04:07.269 --rc geninfo_unexecuted_blocks=1 00:04:07.269 00:04:07.269 ' 00:04:07.269 11:36:32 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.269 --rc genhtml_branch_coverage=1 00:04:07.269 --rc genhtml_function_coverage=1 00:04:07.269 --rc genhtml_legend=1 00:04:07.269 --rc geninfo_all_blocks=1 00:04:07.269 --rc geninfo_unexecuted_blocks=1 00:04:07.269 00:04:07.269 ' 00:04:07.269 11:36:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:07.269 11:36:32 -- nvmf/common.sh@7 -- # uname -s 00:04:07.269 11:36:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.269 11:36:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.269 11:36:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.269 11:36:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.269 11:36:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.269 11:36:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.269 11:36:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.269 11:36:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.269 11:36:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.269 11:36:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.269 11:36:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8d4e942-011b-4e07-bdf8-d00d699eab30 00:04:07.269 11:36:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=b8d4e942-011b-4e07-bdf8-d00d699eab30 00:04:07.269 11:36:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.269 11:36:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.269 11:36:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.269 11:36:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:07.269 11:36:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:07.269 11:36:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.269 11:36:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.269 11:36:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.269 11:36:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.269 11:36:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.269 11:36:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.269 11:36:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.269 11:36:32 -- paths/export.sh@5 -- # export PATH 00:04:07.269 11:36:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.269 11:36:32 -- nvmf/common.sh@51 -- # : 0 00:04:07.269 11:36:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.269 11:36:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.269 11:36:32 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:07.269 11:36:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.269 11:36:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.269 11:36:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.269 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.269 11:36:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.269 11:36:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.269 11:36:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.269 11:36:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.269 11:36:32 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.269 11:36:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.269 11:36:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.269 11:36:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.269 11:36:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.269 11:36:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.269 11:36:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.269 11:36:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.269 11:36:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.269 11:36:32 -- spdk/autotest.sh@48 -- # udevadm_pid=54603 00:04:07.269 11:36:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.269 11:36:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.269 11:36:32 -- pm/common@17 -- # local monitor 00:04:07.269 11:36:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.269 11:36:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.269 11:36:32 -- pm/common@25 -- # sleep 1 00:04:07.269 11:36:32 -- pm/common@21 -- # date +%s 00:04:07.269 11:36:32 -- 
pm/common@21 -- # date +%s 00:04:07.269 11:36:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730720192 00:04:07.269 11:36:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730720192 00:04:07.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730720192_collect-cpu-load.pm.log 00:04:07.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730720192_collect-vmstat.pm.log 00:04:08.464 11:36:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.464 11:36:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.464 11:36:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.464 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:04:08.464 11:36:33 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.464 11:36:33 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:08.464 11:36:33 -- common/autotest_common.sh@10 -- # set +x 00:04:08.464 11:36:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:08.464 11:36:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:08.464 11:36:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:08.464 11:36:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:08.464 11:36:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:08.464 11:36:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.464 11:36:33 -- common/autotest_common.sh@1455 -- # uname 00:04:08.464 11:36:33 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:08.464 11:36:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.464 11:36:33 -- common/autotest_common.sh@1475 -- 
# uname 00:04:08.464 11:36:33 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:08.464 11:36:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:08.464 11:36:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:08.464 lcov: LCOV version 1.15 00:04:08.464 11:36:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:26.548 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:26.548 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:41.450 11:37:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:41.450 11:37:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.450 11:37:06 -- common/autotest_common.sh@10 -- # set +x 00:04:41.450 11:37:06 -- spdk/autotest.sh@78 -- # rm -f 00:04:41.450 11:37:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.971 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:41.971 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:41.971 11:37:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:41.971 11:37:07 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:41.971 11:37:07 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:41.971 11:37:07 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:41.971 
11:37:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:41.971 11:37:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:41.971 11:37:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:41.971 11:37:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.971 11:37:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:41.971 11:37:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:41.971 11:37:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:41.971 11:37:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:41.971 11:37:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:41.971 11:37:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:41.971 11:37:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:41.971 11:37:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:41.971 11:37:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:41.971 11:37:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:41.971 11:37:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:41.971 11:37:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:41.971 11:37:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:41.971 11:37:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:41.971 11:37:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:41.972 11:37:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:41.972 11:37:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:41.972 11:37:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.972 11:37:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.972 11:37:07 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:41.972 11:37:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:41.972 11:37:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:41.972 No valid GPT data, bailing 00:04:41.972 11:37:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.972 11:37:07 -- scripts/common.sh@394 -- # pt= 00:04:41.972 11:37:07 -- scripts/common.sh@395 -- # return 1 00:04:41.972 11:37:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:41.972 1+0 records in 00:04:41.972 1+0 records out 00:04:41.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420323 s, 249 MB/s 00:04:41.972 11:37:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.972 11:37:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.972 11:37:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:41.972 11:37:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:41.972 11:37:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:41.972 No valid GPT data, bailing 00:04:41.972 11:37:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:41.972 11:37:07 -- scripts/common.sh@394 -- # pt= 00:04:41.972 11:37:07 -- scripts/common.sh@395 -- # return 1 00:04:41.972 11:37:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:42.230 1+0 records in 00:04:42.230 1+0 records out 00:04:42.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639155 s, 164 MB/s 00:04:42.230 11:37:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.230 11:37:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.230 11:37:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:42.230 11:37:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:42.230 11:37:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:42.230 No valid GPT data, bailing 00:04:42.230 11:37:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:42.230 11:37:07 -- scripts/common.sh@394 -- # pt= 00:04:42.230 11:37:07 -- scripts/common.sh@395 -- # return 1 00:04:42.230 11:37:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:42.230 1+0 records in 00:04:42.230 1+0 records out 00:04:42.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457147 s, 229 MB/s 00:04:42.230 11:37:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.230 11:37:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.230 11:37:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:42.230 11:37:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:42.230 11:37:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:42.230 No valid GPT data, bailing 00:04:42.230 11:37:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:42.230 11:37:07 -- scripts/common.sh@394 -- # pt= 00:04:42.230 11:37:07 -- scripts/common.sh@395 -- # return 1 00:04:42.230 11:37:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:42.230 1+0 records in 00:04:42.230 1+0 records out 00:04:42.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00657506 s, 159 MB/s 00:04:42.230 11:37:07 -- spdk/autotest.sh@105 -- # sync 00:04:42.514 11:37:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:42.514 11:37:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:42.514 11:37:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:45.046 11:37:10 -- spdk/autotest.sh@111 -- # uname -s 00:04:45.046 11:37:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:45.046 11:37:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:45.046 11:37:10 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:45.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.982 Hugepages 00:04:45.982 node hugesize free / total 00:04:45.982 node0 1048576kB 0 / 0 00:04:45.982 node0 2048kB 0 / 0 00:04:45.982 00:04:45.982 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.982 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:45.982 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:46.240 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:46.240 11:37:11 -- spdk/autotest.sh@117 -- # uname -s 00:04:46.240 11:37:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:46.240 11:37:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:46.240 11:37:11 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:47.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:47.066 11:37:12 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:48.451 11:37:13 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:48.451 11:37:13 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:48.451 11:37:13 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:48.451 11:37:13 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:48.451 11:37:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:48.451 11:37:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:48.451 11:37:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.451 11:37:13 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:48.451 11:37:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:48.451 11:37:13 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:48.451 11:37:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:48.451 11:37:13 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.710 Waiting for block devices as requested 00:04:48.710 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.969 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.969 11:37:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:48.969 11:37:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:48.969 11:37:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:48.969 11:37:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:48.969 11:37:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:48.969 11:37:14 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:04:48.969 11:37:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:48.969 11:37:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:48.969 11:37:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:48.969 11:37:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:48.969 11:37:14 -- common/autotest_common.sh@1541 -- # continue 00:04:48.969 11:37:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:48.969 11:37:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:48.969 11:37:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:48.969 11:37:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:48.969 11:37:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:48.969 11:37:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:48.969 11:37:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:48.969 11:37:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:48.969 11:37:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:04:48.969 11:37:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:48.969 11:37:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:49.228 11:37:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:49.228 11:37:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:49.228 11:37:14 -- common/autotest_common.sh@1541 -- # continue 00:04:49.228 11:37:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:49.228 11:37:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:49.228 11:37:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.228 11:37:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:49.228 11:37:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.228 11:37:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.228 11:37:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.168 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.168 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.168 11:37:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:50.168 11:37:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.168 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:04:50.168 11:37:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:50.168 11:37:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:50.168 11:37:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:50.168 11:37:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:50.168 11:37:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:50.168 11:37:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:50.168 11:37:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:50.168 11:37:15 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:50.168 
11:37:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:50.168 11:37:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:50.168 11:37:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.168 11:37:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:50.168 11:37:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:50.428 11:37:15 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:50.428 11:37:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:50.428 11:37:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:50.428 11:37:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:50.428 11:37:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:50.428 11:37:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:50.428 11:37:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:50.428 11:37:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:50.428 11:37:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:50.428 11:37:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:50.428 11:37:15 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:50.428 11:37:15 -- common/autotest_common.sh@1570 -- # return 0 00:04:50.428 11:37:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:50.428 11:37:15 -- common/autotest_common.sh@1578 -- # return 0 00:04:50.428 11:37:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:50.428 11:37:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:50.428 11:37:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:50.428 11:37:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:50.428 11:37:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:50.428 11:37:15 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.428 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:04:50.428 11:37:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:50.428 11:37:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:50.428 11:37:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.428 11:37:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.428 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:04:50.428 ************************************ 00:04:50.428 START TEST env 00:04:50.428 ************************************ 00:04:50.428 11:37:15 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:50.428 * Looking for test storage... 00:04:50.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:50.428 11:37:15 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.428 11:37:15 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.428 11:37:15 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.428 11:37:15 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.428 11:37:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.428 11:37:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.428 11:37:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.428 11:37:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.428 11:37:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.428 11:37:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.428 11:37:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.428 11:37:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.428 11:37:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.428 11:37:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.428 11:37:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.428 11:37:15 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:50.428 11:37:15 env -- scripts/common.sh@345 -- # : 1 00:04:50.428 11:37:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.428 11:37:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.428 11:37:15 env -- scripts/common.sh@365 -- # decimal 1 00:04:50.428 11:37:15 env -- scripts/common.sh@353 -- # local d=1 00:04:50.428 11:37:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.428 11:37:15 env -- scripts/common.sh@355 -- # echo 1 00:04:50.428 11:37:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.428 11:37:15 env -- scripts/common.sh@366 -- # decimal 2 00:04:50.428 11:37:15 env -- scripts/common.sh@353 -- # local d=2 00:04:50.688 11:37:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.688 11:37:15 env -- scripts/common.sh@355 -- # echo 2 00:04:50.688 11:37:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.688 11:37:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.688 11:37:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.688 11:37:15 env -- scripts/common.sh@368 -- # return 0 00:04:50.688 11:37:15 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.688 11:37:15 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.688 --rc genhtml_branch_coverage=1 00:04:50.688 --rc genhtml_function_coverage=1 00:04:50.688 --rc genhtml_legend=1 00:04:50.688 --rc geninfo_all_blocks=1 00:04:50.688 --rc geninfo_unexecuted_blocks=1 00:04:50.688 00:04:50.688 ' 00:04:50.688 11:37:15 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.688 --rc genhtml_branch_coverage=1 00:04:50.688 --rc genhtml_function_coverage=1 00:04:50.688 --rc genhtml_legend=1 00:04:50.688 --rc 
geninfo_all_blocks=1 00:04:50.688 --rc geninfo_unexecuted_blocks=1 00:04:50.688 00:04:50.688 ' 00:04:50.688 11:37:15 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.688 --rc genhtml_branch_coverage=1 00:04:50.688 --rc genhtml_function_coverage=1 00:04:50.688 --rc genhtml_legend=1 00:04:50.688 --rc geninfo_all_blocks=1 00:04:50.688 --rc geninfo_unexecuted_blocks=1 00:04:50.688 00:04:50.688 ' 00:04:50.688 11:37:15 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.688 --rc genhtml_branch_coverage=1 00:04:50.688 --rc genhtml_function_coverage=1 00:04:50.688 --rc genhtml_legend=1 00:04:50.688 --rc geninfo_all_blocks=1 00:04:50.688 --rc geninfo_unexecuted_blocks=1 00:04:50.688 00:04:50.688 ' 00:04:50.688 11:37:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:50.688 11:37:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.688 11:37:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.688 11:37:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.688 ************************************ 00:04:50.688 START TEST env_memory 00:04:50.688 ************************************ 00:04:50.688 11:37:15 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:50.688 00:04:50.688 00:04:50.688 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.688 http://cunit.sourceforge.net/ 00:04:50.688 00:04:50.688 00:04:50.688 Suite: memory 00:04:50.688 Test: alloc and free memory map ...[2024-11-04 11:37:16.036030] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:50.688 passed 00:04:50.688 Test: mem map translation ...[2024-11-04 11:37:16.079732] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:50.688 [2024-11-04 11:37:16.079776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:50.688 [2024-11-04 11:37:16.079831] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:50.688 [2024-11-04 11:37:16.079850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:50.688 passed 00:04:50.688 Test: mem map registration ...[2024-11-04 11:37:16.145540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:50.688 [2024-11-04 11:37:16.145582] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:50.688 passed 00:04:50.949 Test: mem map adjacent registrations ...passed 00:04:50.949 00:04:50.949 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.949 suites 1 1 n/a 0 0 00:04:50.949 tests 4 4 4 0 0 00:04:50.949 asserts 152 152 152 0 n/a 00:04:50.949 00:04:50.949 Elapsed time = 0.239 seconds 00:04:50.949 00:04:50.949 real 0m0.293s 00:04:50.949 user 0m0.255s 00:04:50.949 sys 0m0.026s 00:04:50.949 11:37:16 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.949 11:37:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:50.949 ************************************ 00:04:50.949 END TEST env_memory 00:04:50.949 ************************************ 00:04:50.949 11:37:16 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:50.949 
11:37:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.949 11:37:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.949 11:37:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.949 ************************************ 00:04:50.949 START TEST env_vtophys 00:04:50.949 ************************************ 00:04:50.949 11:37:16 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:50.949 EAL: lib.eal log level changed from notice to debug 00:04:50.949 EAL: Detected lcore 0 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 1 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 2 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 3 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 4 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 5 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 6 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 7 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 8 as core 0 on socket 0 00:04:50.949 EAL: Detected lcore 9 as core 0 on socket 0 00:04:50.949 EAL: Maximum logical cores by configuration: 128 00:04:50.949 EAL: Detected CPU lcores: 10 00:04:50.949 EAL: Detected NUMA nodes: 1 00:04:50.949 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:50.949 EAL: Detected shared linkage of DPDK 00:04:50.949 EAL: No shared files mode enabled, IPC will be disabled 00:04:50.949 EAL: Selected IOVA mode 'PA' 00:04:50.949 EAL: Probing VFIO support... 00:04:50.949 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:50.949 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:50.949 EAL: Ask a virtual area of 0x2e000 bytes 00:04:50.949 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:50.949 EAL: Setting up physically contiguous memory... 
00:04:50.949 EAL: Setting maximum number of open files to 524288 00:04:50.949 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:50.949 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:50.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.949 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:50.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.949 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:50.949 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:50.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.949 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:50.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.949 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:50.949 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:50.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.949 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:50.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.949 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:50.949 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:50.949 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.949 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:50.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.949 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.949 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:50.949 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:50.949 EAL: Hugepages will be freed exactly as allocated. 
00:04:50.949 EAL: No shared files mode enabled, IPC is disabled 00:04:50.949 EAL: No shared files mode enabled, IPC is disabled 00:04:51.210 EAL: TSC frequency is ~2290000 KHz 00:04:51.210 EAL: Main lcore 0 is ready (tid=7f827bccda40;cpuset=[0]) 00:04:51.210 EAL: Trying to obtain current memory policy. 00:04:51.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.210 EAL: Restoring previous memory policy: 0 00:04:51.210 EAL: request: mp_malloc_sync 00:04:51.210 EAL: No shared files mode enabled, IPC is disabled 00:04:51.210 EAL: Heap on socket 0 was expanded by 2MB 00:04:51.210 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:51.210 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:51.210 EAL: Mem event callback 'spdk:(nil)' registered 00:04:51.210 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:51.210 00:04:51.210 00:04:51.210 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.210 http://cunit.sourceforge.net/ 00:04:51.210 00:04:51.210 00:04:51.210 Suite: components_suite 00:04:51.469 Test: vtophys_malloc_test ...passed 00:04:51.469 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:51.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.469 EAL: Restoring previous memory policy: 4 00:04:51.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.469 EAL: request: mp_malloc_sync 00:04:51.469 EAL: No shared files mode enabled, IPC is disabled 00:04:51.469 EAL: Heap on socket 0 was expanded by 4MB 00:04:51.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.469 EAL: request: mp_malloc_sync 00:04:51.469 EAL: No shared files mode enabled, IPC is disabled 00:04:51.469 EAL: Heap on socket 0 was shrunk by 4MB 00:04:51.469 EAL: Trying to obtain current memory policy. 
00:04:51.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.469 EAL: Restoring previous memory policy: 4 00:04:51.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.469 EAL: request: mp_malloc_sync 00:04:51.469 EAL: No shared files mode enabled, IPC is disabled 00:04:51.469 EAL: Heap on socket 0 was expanded by 6MB 00:04:51.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.469 EAL: request: mp_malloc_sync 00:04:51.469 EAL: No shared files mode enabled, IPC is disabled 00:04:51.469 EAL: Heap on socket 0 was shrunk by 6MB 00:04:51.469 EAL: Trying to obtain current memory policy. 00:04:51.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.469 EAL: Restoring previous memory policy: 4 00:04:51.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.469 EAL: request: mp_malloc_sync 00:04:51.469 EAL: No shared files mode enabled, IPC is disabled 00:04:51.469 EAL: Heap on socket 0 was expanded by 10MB 00:04:51.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.469 EAL: request: mp_malloc_sync 00:04:51.469 EAL: No shared files mode enabled, IPC is disabled 00:04:51.469 EAL: Heap on socket 0 was shrunk by 10MB 00:04:51.469 EAL: Trying to obtain current memory policy. 00:04:51.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.469 EAL: Restoring previous memory policy: 4 00:04:51.469 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.469 EAL: request: mp_malloc_sync 00:04:51.469 EAL: No shared files mode enabled, IPC is disabled 00:04:51.469 EAL: Heap on socket 0 was expanded by 18MB 00:04:51.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.728 EAL: request: mp_malloc_sync 00:04:51.728 EAL: No shared files mode enabled, IPC is disabled 00:04:51.728 EAL: Heap on socket 0 was shrunk by 18MB 00:04:51.728 EAL: Trying to obtain current memory policy. 
00:04:51.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.728 EAL: Restoring previous memory policy: 4 00:04:51.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.728 EAL: request: mp_malloc_sync 00:04:51.728 EAL: No shared files mode enabled, IPC is disabled 00:04:51.728 EAL: Heap on socket 0 was expanded by 34MB 00:04:51.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.728 EAL: request: mp_malloc_sync 00:04:51.728 EAL: No shared files mode enabled, IPC is disabled 00:04:51.728 EAL: Heap on socket 0 was shrunk by 34MB 00:04:51.728 EAL: Trying to obtain current memory policy. 00:04:51.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.728 EAL: Restoring previous memory policy: 4 00:04:51.728 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.728 EAL: request: mp_malloc_sync 00:04:51.728 EAL: No shared files mode enabled, IPC is disabled 00:04:51.728 EAL: Heap on socket 0 was expanded by 66MB 00:04:51.987 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.987 EAL: request: mp_malloc_sync 00:04:51.987 EAL: No shared files mode enabled, IPC is disabled 00:04:51.987 EAL: Heap on socket 0 was shrunk by 66MB 00:04:51.987 EAL: Trying to obtain current memory policy. 00:04:51.987 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.987 EAL: Restoring previous memory policy: 4 00:04:51.987 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.987 EAL: request: mp_malloc_sync 00:04:51.987 EAL: No shared files mode enabled, IPC is disabled 00:04:51.987 EAL: Heap on socket 0 was expanded by 130MB 00:04:52.246 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.246 EAL: request: mp_malloc_sync 00:04:52.246 EAL: No shared files mode enabled, IPC is disabled 00:04:52.246 EAL: Heap on socket 0 was shrunk by 130MB 00:04:52.505 EAL: Trying to obtain current memory policy. 
00:04:52.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.505 EAL: Restoring previous memory policy: 4 00:04:52.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.505 EAL: request: mp_malloc_sync 00:04:52.505 EAL: No shared files mode enabled, IPC is disabled 00:04:52.505 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.073 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.073 EAL: request: mp_malloc_sync 00:04:53.073 EAL: No shared files mode enabled, IPC is disabled 00:04:53.073 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.641 EAL: Trying to obtain current memory policy. 00:04:53.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.641 EAL: Restoring previous memory policy: 4 00:04:53.641 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.641 EAL: request: mp_malloc_sync 00:04:53.641 EAL: No shared files mode enabled, IPC is disabled 00:04:53.641 EAL: Heap on socket 0 was expanded by 514MB 00:04:54.578 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.578 EAL: request: mp_malloc_sync 00:04:54.578 EAL: No shared files mode enabled, IPC is disabled 00:04:54.578 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.515 EAL: Trying to obtain current memory policy. 
00:04:55.515 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:55.775 EAL: Restoring previous memory policy: 4
00:04:55.775 EAL: Calling mem event callback 'spdk:(nil)'
00:04:55.775 EAL: request: mp_malloc_sync
00:04:55.775 EAL: No shared files mode enabled, IPC is disabled
00:04:55.775 EAL: Heap on socket 0 was expanded by 1026MB
00:04:57.679 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.679 EAL: request: mp_malloc_sync
00:04:57.679 EAL: No shared files mode enabled, IPC is disabled
00:04:57.679 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:59.584 passed
00:04:59.584
00:04:59.584 Run Summary: Type Total Ran Passed Failed Inactive
00:04:59.584 suites 1 1 n/a 0 0
00:04:59.584 tests 2 2 2 0 0
00:04:59.584 asserts 5810 5810 5810 0 n/a
00:04:59.584
00:04:59.584 Elapsed time = 8.490 seconds
00:04:59.584 EAL: Calling mem event callback 'spdk:(nil)'
00:04:59.584 EAL: request: mp_malloc_sync
00:04:59.584 EAL: No shared files mode enabled, IPC is disabled
00:04:59.584 EAL: Heap on socket 0 was shrunk by 2MB
00:04:59.584 EAL: No shared files mode enabled, IPC is disabled
00:04:59.844 EAL: No shared files mode enabled, IPC is disabled
00:04:59.844 EAL: No shared files mode enabled, IPC is disabled
00:04:59.844
00:04:59.844 real 0m8.827s
00:04:59.844 user 0m7.845s
00:04:59.844 sys 0m0.817s
00:04:59.844 ************************************
00:04:59.844 END TEST env_vtophys
00:04:59.844 ************************************
00:04:59.844 11:37:25 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:59.844 11:37:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:59.844 11:37:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:59.844 11:37:25 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:59.844 11:37:25 env
************************************
00:04:59.844 START TEST env_pci
00:04:59.844 ************************************
00:04:59.844 11:37:25 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:59.844
00:04:59.844
00:04:59.844 CUnit - A unit testing framework for C - Version 2.1-3
00:04:59.844 http://cunit.sourceforge.net/
00:04:59.844
00:04:59.844
00:04:59.844 Suite: pci
00:04:59.844 Test: pci_hook ...[2024-11-04 11:37:25.248989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56945 has claimed it
00:04:59.844 EAL: Cannot find device (10000:00:01.0)
00:04:59.844 EAL: Failed to attach device on primary process
00:04:59.844 passed
00:04:59.844
00:04:59.844 Run Summary: Type Total Ran Passed Failed Inactive
00:04:59.844 suites 1 1 n/a 0 0
00:04:59.844 tests 1 1 1 0 0
00:04:59.844 asserts 25 25 25 0 n/a
00:04:59.844
00:04:59.844 Elapsed time = 0.007 seconds
00:04:59.844
00:04:59.844 real 0m0.100s
00:04:59.844 user 0m0.041s
00:04:59.844 sys 0m0.056s
00:04:59.844 11:37:25 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:59.844 11:37:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:59.844 ************************************
00:04:59.844 END TEST env_pci
00:04:59.844 ************************************
00:04:59.844 11:37:25 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:59.844 11:37:25 env -- env/env.sh@15 -- # uname
00:04:59.844 11:37:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:59.844 11:37:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:59.844 11:37:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:59.844 11:37:25 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:04:59.844 11:37:25 env
-- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:59.844 11:37:25 env -- common/autotest_common.sh@10 -- # set +x
00:04:59.844 ************************************
00:04:59.844 START TEST env_dpdk_post_init
00:04:59.844 ************************************
00:04:59.844 11:37:25 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:00.104 EAL: Detected CPU lcores: 10
00:05:00.104 EAL: Detected NUMA nodes: 1
00:05:00.104 EAL: Detected shared linkage of DPDK
00:05:00.104 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:00.104 EAL: Selected IOVA mode 'PA'
00:05:00.104 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:00.104 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:00.104 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:00.364 Starting DPDK initialization...
00:05:00.364 Starting SPDK post initialization...
00:05:00.364 SPDK NVMe probe
00:05:00.364 Attaching to 0000:00:10.0
00:05:00.364 Attaching to 0000:00:11.0
00:05:00.364 Attached to 0000:00:10.0
00:05:00.364 Attached to 0000:00:11.0
00:05:00.364 Cleaning up...
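The probe lines above identify each emulated NVMe controller (vendor:device 1b36:0010) by its PCI address in domain:bus:device.function form, e.g. 0000:00:10.0. Below is a hypothetical helper (not an SPDK API) for splitting such an address into numeric parts when scraping logs like this one:

```python
import re

# Parse a PCI address like "0000:00:10.0" into (domain, bus, device, function).
# Hypothetical log-scraping helper; not part of SPDK or DPDK.
_BDF = re.compile(r"^([0-9a-fA-F]{4}):([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])$")

def parse_bdf(addr: str):
    m = _BDF.match(addr)
    if not m:
        raise ValueError(f"not a PCI BDF address: {addr!r}")
    domain, bus, dev, fn = m.groups()
    # domain/bus/device are hexadecimal in the BDF notation.
    return int(domain, 16), int(bus, 16), int(dev, 16), int(fn)

print(parse_bdf("0000:00:10.0"))  # (0, 0, 16, 0)
```

Note the hex device field: the "10" in 0000:00:10.0 is device 16, which is why QEMU can expose both 00:10.0 and 00:11.0 without colliding with low-numbered slots.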
00:05:00.364
00:05:00.364 real 0m0.294s
00:05:00.364 user 0m0.104s
00:05:00.364 sys 0m0.089s
00:05:00.364 11:37:25 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:00.364 ************************************
00:05:00.364 END TEST env_dpdk_post_init
00:05:00.364 ************************************
00:05:00.364 11:37:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:00.364 11:37:25 env -- env/env.sh@26 -- # uname
00:05:00.364 11:37:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:00.364 11:37:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:00.364 11:37:25 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:00.364 11:37:25 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:00.364 11:37:25 env -- common/autotest_common.sh@10 -- # set +x
00:05:00.364 ************************************
00:05:00.364 START TEST env_mem_callbacks
00:05:00.364 ************************************
00:05:00.364 11:37:25 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:00.364 EAL: Detected CPU lcores: 10
00:05:00.364 EAL: Detected NUMA nodes: 1
00:05:00.364 EAL: Detected shared linkage of DPDK
00:05:00.364 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:00.364 EAL: Selected IOVA mode 'PA'
00:05:00.623 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:00.623
00:05:00.623
00:05:00.623 CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.623 http://cunit.sourceforge.net/
00:05:00.623
00:05:00.623
00:05:00.623 Suite: memory
00:05:00.623 Test: test ...
00:05:00.623 register 0x200000200000 2097152
00:05:00.623 malloc 3145728
00:05:00.623 register 0x200000400000 4194304
00:05:00.623 buf 0x2000004fffc0 len 3145728 PASSED
00:05:00.623 malloc 64
00:05:00.623 buf 0x2000004ffec0 len 64 PASSED
00:05:00.623 malloc 4194304
00:05:00.623 register 0x200000800000 6291456
00:05:00.623 buf 0x2000009fffc0 len 4194304 PASSED
00:05:00.623 free 0x2000004fffc0 3145728
00:05:00.623 free 0x2000004ffec0 64
00:05:00.623 unregister 0x200000400000 4194304 PASSED
00:05:00.623 free 0x2000009fffc0 4194304
00:05:00.623 unregister 0x200000800000 6291456 PASSED
00:05:00.623 malloc 8388608
00:05:00.623 register 0x200000400000 10485760
00:05:00.623 buf 0x2000005fffc0 len 8388608 PASSED
00:05:00.623 free 0x2000005fffc0 8388608
00:05:00.623 unregister 0x200000400000 10485760 PASSED
00:05:00.623 passed
00:05:00.623
00:05:00.623 Run Summary: Type Total Ran Passed Failed Inactive
00:05:00.623 suites 1 1 n/a 0 0
00:05:00.623 tests 1 1 1 0 0
00:05:00.623 asserts 15 15 15 0 n/a
00:05:00.623
00:05:00.623 Elapsed time = 0.090 seconds
00:05:00.623
00:05:00.623 real 0m0.300s
00:05:00.623 user 0m0.119s
00:05:00.623 sys 0m0.077s
00:05:00.623 11:37:26 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:00.623 11:37:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:00.623 ************************************
00:05:00.623 END TEST env_mem_callbacks
00:05:00.623 ************************************
00:05:00.623
00:05:00.623 real 0m10.334s
00:05:00.623 user 0m8.586s
00:05:00.623 sys 0m1.386s
00:05:00.623 ************************************
00:05:00.623 END TEST env
00:05:00.623 ************************************
00:05:00.623 11:37:26 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:00.623 11:37:26 env -- common/autotest_common.sh@10 -- # set +x
00:05:00.623 11:37:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:00.623 11:37:26 --
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.623 11:37:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.623 11:37:26 -- common/autotest_common.sh@10 -- # set +x 00:05:00.623 ************************************ 00:05:00.623 START TEST rpc 00:05:00.623 ************************************ 00:05:00.623 11:37:26 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:00.883 * Looking for test storage... 00:05:00.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.883 11:37:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.883 11:37:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.883 11:37:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.883 11:37:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.883 11:37:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.883 11:37:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.883 11:37:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.883 11:37:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.883 11:37:26 rpc -- scripts/common.sh@345 -- # : 1 00:05:00.883 11:37:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.883 11:37:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.883 11:37:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.883 11:37:26 rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.883 11:37:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.883 11:37:26 rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.883 11:37:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.883 11:37:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.883 11:37:26 rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.883 11:37:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.883 11:37:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.883 11:37:26 rpc -- scripts/common.sh@368 -- # return 0 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:00.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.883 --rc genhtml_branch_coverage=1 00:05:00.883 --rc genhtml_function_coverage=1 00:05:00.883 --rc genhtml_legend=1 00:05:00.883 --rc geninfo_all_blocks=1 00:05:00.883 --rc geninfo_unexecuted_blocks=1 00:05:00.883 00:05:00.883 ' 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:00.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.883 --rc genhtml_branch_coverage=1 00:05:00.883 --rc genhtml_function_coverage=1 00:05:00.883 --rc genhtml_legend=1 00:05:00.883 --rc geninfo_all_blocks=1 00:05:00.883 --rc geninfo_unexecuted_blocks=1 00:05:00.883 00:05:00.883 ' 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:00.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:00.883 --rc genhtml_branch_coverage=1 00:05:00.883 --rc genhtml_function_coverage=1 00:05:00.883 --rc genhtml_legend=1 00:05:00.883 --rc geninfo_all_blocks=1 00:05:00.883 --rc geninfo_unexecuted_blocks=1 00:05:00.883 00:05:00.883 ' 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:00.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.883 --rc genhtml_branch_coverage=1 00:05:00.883 --rc genhtml_function_coverage=1 00:05:00.883 --rc genhtml_legend=1 00:05:00.883 --rc geninfo_all_blocks=1 00:05:00.883 --rc geninfo_unexecuted_blocks=1 00:05:00.883 00:05:00.883 ' 00:05:00.883 11:37:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57073 00:05:00.883 11:37:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:00.883 11:37:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.883 11:37:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57073 00:05:00.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@833 -- # '[' -z 57073 ']' 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.883 11:37:26 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.884 11:37:26 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.884 11:37:26 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.884 11:37:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.199 [2024-11-04 11:37:26.420105] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
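The xtrace walk through scripts/common.sh above shows how cmp_versions decides `lt 1.15 2`: each version string is split on `.`, `-` and `:` (the `IFS=.-:` reads) into an array, and the components are compared left to right. A hedged Python re-implementation of that comparison (handling of non-numeric components is simplified here relative to the shell script):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    # Mirror of the shell logic: split on '.', '-' or ':' and compare
    # numeric components left to right; missing components count as 0.
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return (x < y) if op == "<" else (x > y)
    return False  # equal versions: neither strictly less nor greater

print(cmp_versions("1.15", "<", "2"))  # True
```

This is what gates the lcov branch/function-coverage flags: only an lcov older than 2 takes the `--rc lcov_*` option spelling exported just above.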
00:05:01.199 [2024-11-04 11:37:26.420262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57073 ] 00:05:01.199 [2024-11-04 11:37:26.597272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.458 [2024-11-04 11:37:26.724044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.458 [2024-11-04 11:37:26.724106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57073' to capture a snapshot of events at runtime. 00:05:01.458 [2024-11-04 11:37:26.724117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.458 [2024-11-04 11:37:26.724128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.458 [2024-11-04 11:37:26.724137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57073 for offline analysis/debug. 
00:05:01.458 [2024-11-04 11:37:26.725838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.427 11:37:27 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.427 11:37:27 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:02.427 11:37:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.427 11:37:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.427 11:37:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.427 11:37:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.427 11:37:27 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.427 11:37:27 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.427 11:37:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.427 ************************************ 00:05:02.427 START TEST rpc_integrity 00:05:02.427 ************************************ 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.427 11:37:27 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.427 { 00:05:02.427 "name": "Malloc0", 00:05:02.427 "aliases": [ 00:05:02.427 "14769e73-0214-4d7a-94ac-bc27f0128e67" 00:05:02.427 ], 00:05:02.427 "product_name": "Malloc disk", 00:05:02.427 "block_size": 512, 00:05:02.427 "num_blocks": 16384, 00:05:02.427 "uuid": "14769e73-0214-4d7a-94ac-bc27f0128e67", 00:05:02.427 "assigned_rate_limits": { 00:05:02.427 "rw_ios_per_sec": 0, 00:05:02.427 "rw_mbytes_per_sec": 0, 00:05:02.427 "r_mbytes_per_sec": 0, 00:05:02.427 "w_mbytes_per_sec": 0 00:05:02.427 }, 00:05:02.427 "claimed": false, 00:05:02.427 "zoned": false, 00:05:02.427 "supported_io_types": { 00:05:02.427 "read": true, 00:05:02.427 "write": true, 00:05:02.427 "unmap": true, 00:05:02.427 "flush": true, 00:05:02.427 "reset": true, 00:05:02.427 "nvme_admin": false, 00:05:02.427 "nvme_io": false, 00:05:02.427 "nvme_io_md": false, 00:05:02.427 "write_zeroes": true, 00:05:02.427 "zcopy": true, 00:05:02.427 "get_zone_info": false, 00:05:02.427 "zone_management": false, 00:05:02.427 "zone_append": false, 00:05:02.427 "compare": false, 00:05:02.427 "compare_and_write": false, 00:05:02.427 "abort": true, 00:05:02.427 "seek_hole": false, 
00:05:02.427 "seek_data": false, 00:05:02.427 "copy": true, 00:05:02.427 "nvme_iov_md": false 00:05:02.427 }, 00:05:02.427 "memory_domains": [ 00:05:02.427 { 00:05:02.427 "dma_device_id": "system", 00:05:02.427 "dma_device_type": 1 00:05:02.427 }, 00:05:02.427 { 00:05:02.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.427 "dma_device_type": 2 00:05:02.427 } 00:05:02.427 ], 00:05:02.427 "driver_specific": {} 00:05:02.427 } 00:05:02.427 ]' 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.427 [2024-11-04 11:37:27.789724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:02.427 [2024-11-04 11:37:27.789801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.427 [2024-11-04 11:37:27.789832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:02.427 [2024-11-04 11:37:27.789849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.427 [2024-11-04 11:37:27.792465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.427 [2024-11-04 11:37:27.792571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.427 Passthru0 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:02.427 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.427 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.427 { 00:05:02.427 "name": "Malloc0", 00:05:02.427 "aliases": [ 00:05:02.427 "14769e73-0214-4d7a-94ac-bc27f0128e67" 00:05:02.427 ], 00:05:02.427 "product_name": "Malloc disk", 00:05:02.427 "block_size": 512, 00:05:02.427 "num_blocks": 16384, 00:05:02.427 "uuid": "14769e73-0214-4d7a-94ac-bc27f0128e67", 00:05:02.427 "assigned_rate_limits": { 00:05:02.427 "rw_ios_per_sec": 0, 00:05:02.427 "rw_mbytes_per_sec": 0, 00:05:02.427 "r_mbytes_per_sec": 0, 00:05:02.427 "w_mbytes_per_sec": 0 00:05:02.427 }, 00:05:02.427 "claimed": true, 00:05:02.427 "claim_type": "exclusive_write", 00:05:02.427 "zoned": false, 00:05:02.427 "supported_io_types": { 00:05:02.427 "read": true, 00:05:02.427 "write": true, 00:05:02.427 "unmap": true, 00:05:02.427 "flush": true, 00:05:02.427 "reset": true, 00:05:02.427 "nvme_admin": false, 00:05:02.427 "nvme_io": false, 00:05:02.427 "nvme_io_md": false, 00:05:02.427 "write_zeroes": true, 00:05:02.427 "zcopy": true, 00:05:02.427 "get_zone_info": false, 00:05:02.427 "zone_management": false, 00:05:02.427 "zone_append": false, 00:05:02.427 "compare": false, 00:05:02.427 "compare_and_write": false, 00:05:02.427 "abort": true, 00:05:02.427 "seek_hole": false, 00:05:02.427 "seek_data": false, 00:05:02.427 "copy": true, 00:05:02.427 "nvme_iov_md": false 00:05:02.427 }, 00:05:02.427 "memory_domains": [ 00:05:02.427 { 00:05:02.427 "dma_device_id": "system", 00:05:02.427 "dma_device_type": 1 00:05:02.427 }, 00:05:02.427 { 00:05:02.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.427 "dma_device_type": 2 00:05:02.427 } 00:05:02.427 ], 00:05:02.427 "driver_specific": {} 00:05:02.427 }, 00:05:02.427 { 00:05:02.427 "name": "Passthru0", 00:05:02.427 "aliases": [ 00:05:02.427 "d6585e58-b257-56f3-aad4-b0c565b1795f" 00:05:02.427 ], 00:05:02.427 "product_name": "passthru", 00:05:02.427 
"block_size": 512, 00:05:02.427 "num_blocks": 16384, 00:05:02.427 "uuid": "d6585e58-b257-56f3-aad4-b0c565b1795f", 00:05:02.427 "assigned_rate_limits": { 00:05:02.427 "rw_ios_per_sec": 0, 00:05:02.427 "rw_mbytes_per_sec": 0, 00:05:02.427 "r_mbytes_per_sec": 0, 00:05:02.427 "w_mbytes_per_sec": 0 00:05:02.427 }, 00:05:02.427 "claimed": false, 00:05:02.427 "zoned": false, 00:05:02.427 "supported_io_types": { 00:05:02.427 "read": true, 00:05:02.427 "write": true, 00:05:02.427 "unmap": true, 00:05:02.427 "flush": true, 00:05:02.427 "reset": true, 00:05:02.427 "nvme_admin": false, 00:05:02.427 "nvme_io": false, 00:05:02.427 "nvme_io_md": false, 00:05:02.427 "write_zeroes": true, 00:05:02.427 "zcopy": true, 00:05:02.427 "get_zone_info": false, 00:05:02.427 "zone_management": false, 00:05:02.427 "zone_append": false, 00:05:02.427 "compare": false, 00:05:02.427 "compare_and_write": false, 00:05:02.427 "abort": true, 00:05:02.427 "seek_hole": false, 00:05:02.427 "seek_data": false, 00:05:02.427 "copy": true, 00:05:02.427 "nvme_iov_md": false 00:05:02.427 }, 00:05:02.427 "memory_domains": [ 00:05:02.427 { 00:05:02.427 "dma_device_id": "system", 00:05:02.427 "dma_device_type": 1 00:05:02.427 }, 00:05:02.427 { 00:05:02.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.427 "dma_device_type": 2 00:05:02.427 } 00:05:02.427 ], 00:05:02.427 "driver_specific": { 00:05:02.427 "passthru": { 00:05:02.427 "name": "Passthru0", 00:05:02.427 "base_bdev_name": "Malloc0" 00:05:02.428 } 00:05:02.428 } 00:05:02.428 } 00:05:02.428 ]' 00:05:02.428 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.428 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.428 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.428 11:37:27 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.428 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.428 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.428 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.428 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.428 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.685 11:37:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.685 00:05:02.685 real 0m0.319s 00:05:02.685 user 0m0.174s 00:05:02.685 sys 0m0.047s 00:05:02.685 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.685 11:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.685 ************************************ 00:05:02.685 END TEST rpc_integrity 00:05:02.685 ************************************ 00:05:02.685 11:37:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:02.685 11:37:28 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.685 11:37:28 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.685 11:37:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.685 ************************************ 00:05:02.685 START TEST rpc_plugins 00:05:02.685 ************************************ 00:05:02.685 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:02.685 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.685 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.685 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.685 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.685 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.685 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.685 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.685 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.685 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.685 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.685 { 00:05:02.685 "name": "Malloc1", 00:05:02.685 "aliases": [ 00:05:02.685 "d17b52fe-70f0-46b2-9d66-ba911bfb730f" 00:05:02.686 ], 00:05:02.686 "product_name": "Malloc disk", 00:05:02.686 "block_size": 4096, 00:05:02.686 "num_blocks": 256, 00:05:02.686 "uuid": "d17b52fe-70f0-46b2-9d66-ba911bfb730f", 00:05:02.686 "assigned_rate_limits": { 00:05:02.686 "rw_ios_per_sec": 0, 00:05:02.686 "rw_mbytes_per_sec": 0, 00:05:02.686 "r_mbytes_per_sec": 0, 00:05:02.686 "w_mbytes_per_sec": 0 00:05:02.686 }, 00:05:02.686 "claimed": false, 00:05:02.686 "zoned": false, 00:05:02.686 "supported_io_types": { 00:05:02.686 "read": true, 00:05:02.686 "write": true, 00:05:02.686 "unmap": true, 00:05:02.686 "flush": true, 00:05:02.686 "reset": true, 00:05:02.686 "nvme_admin": false, 00:05:02.686 "nvme_io": false, 00:05:02.686 "nvme_io_md": false, 00:05:02.686 "write_zeroes": true, 00:05:02.686 "zcopy": true, 00:05:02.686 "get_zone_info": false, 00:05:02.686 "zone_management": false, 00:05:02.686 "zone_append": false, 00:05:02.686 "compare": false, 00:05:02.686 "compare_and_write": false, 00:05:02.686 "abort": true, 00:05:02.686 "seek_hole": false, 00:05:02.686 "seek_data": false, 00:05:02.686 "copy": 
true, 00:05:02.686 "nvme_iov_md": false 00:05:02.686 }, 00:05:02.686 "memory_domains": [ 00:05:02.686 { 00:05:02.686 "dma_device_id": "system", 00:05:02.686 "dma_device_type": 1 00:05:02.686 }, 00:05:02.686 { 00:05:02.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.686 "dma_device_type": 2 00:05:02.686 } 00:05:02.686 ], 00:05:02.686 "driver_specific": {} 00:05:02.686 } 00:05:02.686 ]' 00:05:02.686 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:02.686 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.686 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.686 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.686 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:02.686 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:02.686 11:37:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.686 ************************************ 00:05:02.686 END TEST rpc_plugins 00:05:02.686 ************************************ 00:05:02.686 00:05:02.686 real 0m0.148s 00:05:02.686 user 0m0.086s 00:05:02.686 sys 0m0.026s 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.686 11:37:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.944 11:37:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.944 11:37:28 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.944 11:37:28 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.944 11:37:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.944 ************************************ 00:05:02.944 START TEST rpc_trace_cmd_test 00:05:02.944 ************************************ 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:02.944 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57073", 00:05:02.944 "tpoint_group_mask": "0x8", 00:05:02.944 "iscsi_conn": { 00:05:02.944 "mask": "0x2", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "scsi": { 00:05:02.944 "mask": "0x4", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "bdev": { 00:05:02.944 "mask": "0x8", 00:05:02.944 "tpoint_mask": "0xffffffffffffffff" 00:05:02.944 }, 00:05:02.944 "nvmf_rdma": { 00:05:02.944 "mask": "0x10", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "nvmf_tcp": { 00:05:02.944 "mask": "0x20", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "ftl": { 00:05:02.944 "mask": "0x40", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "blobfs": { 00:05:02.944 "mask": "0x80", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "dsa": { 00:05:02.944 "mask": "0x200", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "thread": { 00:05:02.944 "mask": "0x400", 00:05:02.944 
"tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "nvme_pcie": { 00:05:02.944 "mask": "0x800", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "iaa": { 00:05:02.944 "mask": "0x1000", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "nvme_tcp": { 00:05:02.944 "mask": "0x2000", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "bdev_nvme": { 00:05:02.944 "mask": "0x4000", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "sock": { 00:05:02.944 "mask": "0x8000", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "blob": { 00:05:02.944 "mask": "0x10000", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "bdev_raid": { 00:05:02.944 "mask": "0x20000", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 }, 00:05:02.944 "scheduler": { 00:05:02.944 "mask": "0x40000", 00:05:02.944 "tpoint_mask": "0x0" 00:05:02.944 } 00:05:02.944 }' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:02.944 ************************************ 00:05:02.944 END TEST rpc_trace_cmd_test 00:05:02.944 ************************************ 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:02.944 00:05:02.944 real 0m0.224s 00:05:02.944 user 
0m0.186s 00:05:02.944 sys 0m0.029s 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.944 11:37:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 11:37:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.202 11:37:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.202 11:37:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.202 11:37:28 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.202 11:37:28 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.202 11:37:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 ************************************ 00:05:03.202 START TEST rpc_daemon_integrity 00:05:03.202 ************************************ 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.202 { 00:05:03.202 "name": "Malloc2", 00:05:03.202 "aliases": [ 00:05:03.202 "73d4e4e2-85c0-4ccf-9e07-0e84fd3d0d58" 00:05:03.202 ], 00:05:03.202 "product_name": "Malloc disk", 00:05:03.202 "block_size": 512, 00:05:03.202 "num_blocks": 16384, 00:05:03.202 "uuid": "73d4e4e2-85c0-4ccf-9e07-0e84fd3d0d58", 00:05:03.202 "assigned_rate_limits": { 00:05:03.202 "rw_ios_per_sec": 0, 00:05:03.202 "rw_mbytes_per_sec": 0, 00:05:03.202 "r_mbytes_per_sec": 0, 00:05:03.202 "w_mbytes_per_sec": 0 00:05:03.202 }, 00:05:03.202 "claimed": false, 00:05:03.202 "zoned": false, 00:05:03.202 "supported_io_types": { 00:05:03.202 "read": true, 00:05:03.202 "write": true, 00:05:03.202 "unmap": true, 00:05:03.202 "flush": true, 00:05:03.202 "reset": true, 00:05:03.202 "nvme_admin": false, 00:05:03.202 "nvme_io": false, 00:05:03.202 "nvme_io_md": false, 00:05:03.202 "write_zeroes": true, 00:05:03.202 "zcopy": true, 00:05:03.202 "get_zone_info": false, 00:05:03.202 "zone_management": false, 00:05:03.202 "zone_append": false, 00:05:03.202 "compare": false, 00:05:03.202 "compare_and_write": false, 00:05:03.202 "abort": true, 00:05:03.202 "seek_hole": false, 00:05:03.202 "seek_data": false, 00:05:03.202 "copy": true, 00:05:03.202 "nvme_iov_md": false 00:05:03.202 }, 00:05:03.202 "memory_domains": [ 00:05:03.202 { 00:05:03.202 "dma_device_id": "system", 00:05:03.202 "dma_device_type": 1 00:05:03.202 }, 00:05:03.202 { 00:05:03.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.202 "dma_device_type": 2 00:05:03.202 } 
00:05:03.202 ], 00:05:03.202 "driver_specific": {} 00:05:03.202 } 00:05:03.202 ]' 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 [2024-11-04 11:37:28.663961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.202 [2024-11-04 11:37:28.664034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.202 [2024-11-04 11:37:28.664060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:03.202 [2024-11-04 11:37:28.664072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.202 [2024-11-04 11:37:28.666692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.202 [2024-11-04 11:37:28.666736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.202 Passthru0 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.202 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.202 { 00:05:03.202 "name": "Malloc2", 00:05:03.202 "aliases": [ 00:05:03.202 "73d4e4e2-85c0-4ccf-9e07-0e84fd3d0d58" 
00:05:03.202 ], 00:05:03.202 "product_name": "Malloc disk", 00:05:03.202 "block_size": 512, 00:05:03.202 "num_blocks": 16384, 00:05:03.202 "uuid": "73d4e4e2-85c0-4ccf-9e07-0e84fd3d0d58", 00:05:03.202 "assigned_rate_limits": { 00:05:03.202 "rw_ios_per_sec": 0, 00:05:03.202 "rw_mbytes_per_sec": 0, 00:05:03.202 "r_mbytes_per_sec": 0, 00:05:03.202 "w_mbytes_per_sec": 0 00:05:03.202 }, 00:05:03.202 "claimed": true, 00:05:03.202 "claim_type": "exclusive_write", 00:05:03.202 "zoned": false, 00:05:03.202 "supported_io_types": { 00:05:03.202 "read": true, 00:05:03.203 "write": true, 00:05:03.203 "unmap": true, 00:05:03.203 "flush": true, 00:05:03.203 "reset": true, 00:05:03.203 "nvme_admin": false, 00:05:03.203 "nvme_io": false, 00:05:03.203 "nvme_io_md": false, 00:05:03.203 "write_zeroes": true, 00:05:03.203 "zcopy": true, 00:05:03.203 "get_zone_info": false, 00:05:03.203 "zone_management": false, 00:05:03.203 "zone_append": false, 00:05:03.203 "compare": false, 00:05:03.203 "compare_and_write": false, 00:05:03.203 "abort": true, 00:05:03.203 "seek_hole": false, 00:05:03.203 "seek_data": false, 00:05:03.203 "copy": true, 00:05:03.203 "nvme_iov_md": false 00:05:03.203 }, 00:05:03.203 "memory_domains": [ 00:05:03.203 { 00:05:03.203 "dma_device_id": "system", 00:05:03.203 "dma_device_type": 1 00:05:03.203 }, 00:05:03.203 { 00:05:03.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.203 "dma_device_type": 2 00:05:03.203 } 00:05:03.203 ], 00:05:03.203 "driver_specific": {} 00:05:03.203 }, 00:05:03.203 { 00:05:03.203 "name": "Passthru0", 00:05:03.203 "aliases": [ 00:05:03.203 "ab960415-5e6c-54b7-a62e-ffff58a176ff" 00:05:03.203 ], 00:05:03.203 "product_name": "passthru", 00:05:03.203 "block_size": 512, 00:05:03.203 "num_blocks": 16384, 00:05:03.203 "uuid": "ab960415-5e6c-54b7-a62e-ffff58a176ff", 00:05:03.203 "assigned_rate_limits": { 00:05:03.203 "rw_ios_per_sec": 0, 00:05:03.203 "rw_mbytes_per_sec": 0, 00:05:03.203 "r_mbytes_per_sec": 0, 00:05:03.203 "w_mbytes_per_sec": 0 
00:05:03.203 }, 00:05:03.203 "claimed": false, 00:05:03.203 "zoned": false, 00:05:03.203 "supported_io_types": { 00:05:03.203 "read": true, 00:05:03.203 "write": true, 00:05:03.203 "unmap": true, 00:05:03.203 "flush": true, 00:05:03.203 "reset": true, 00:05:03.203 "nvme_admin": false, 00:05:03.203 "nvme_io": false, 00:05:03.203 "nvme_io_md": false, 00:05:03.203 "write_zeroes": true, 00:05:03.203 "zcopy": true, 00:05:03.203 "get_zone_info": false, 00:05:03.203 "zone_management": false, 00:05:03.203 "zone_append": false, 00:05:03.203 "compare": false, 00:05:03.203 "compare_and_write": false, 00:05:03.203 "abort": true, 00:05:03.203 "seek_hole": false, 00:05:03.203 "seek_data": false, 00:05:03.203 "copy": true, 00:05:03.203 "nvme_iov_md": false 00:05:03.203 }, 00:05:03.203 "memory_domains": [ 00:05:03.203 { 00:05:03.203 "dma_device_id": "system", 00:05:03.203 "dma_device_type": 1 00:05:03.203 }, 00:05:03.203 { 00:05:03.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.203 "dma_device_type": 2 00:05:03.203 } 00:05:03.203 ], 00:05:03.203 "driver_specific": { 00:05:03.203 "passthru": { 00:05:03.203 "name": "Passthru0", 00:05:03.203 "base_bdev_name": "Malloc2" 00:05:03.203 } 00:05:03.203 } 00:05:03.203 } 00:05:03.203 ]' 00:05:03.203 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.460 ************************************ 00:05:03.460 END TEST rpc_daemon_integrity 00:05:03.460 ************************************ 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.460 00:05:03.460 real 0m0.341s 00:05:03.460 user 0m0.203s 00:05:03.460 sys 0m0.045s 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.460 11:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.460 11:37:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.460 11:37:28 rpc -- rpc/rpc.sh@84 -- # killprocess 57073 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@952 -- # '[' -z 57073 ']' 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@956 -- # kill -0 57073 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@957 -- # uname 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57073 00:05:03.460 killing process with pid 57073 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57073' 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@971 -- # kill 57073 00:05:03.460 11:37:28 rpc -- common/autotest_common.sh@976 -- # wait 57073 00:05:06.781 00:05:06.781 real 0m5.407s 00:05:06.781 user 0m5.926s 00:05:06.781 sys 0m0.860s 00:05:06.781 11:37:31 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.781 11:37:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.781 ************************************ 00:05:06.781 END TEST rpc 00:05:06.781 ************************************ 00:05:06.781 11:37:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:06.781 11:37:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.781 11:37:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.781 11:37:31 -- common/autotest_common.sh@10 -- # set +x 00:05:06.781 ************************************ 00:05:06.781 START TEST skip_rpc 00:05:06.781 ************************************ 00:05:06.781 11:37:31 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:06.781 * Looking for test storage... 
00:05:06.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.781 11:37:31 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.781 11:37:31 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.781 11:37:31 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.781 11:37:31 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:06.781 11:37:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.782 11:37:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:06.782 11:37:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.782 11:37:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.782 11:37:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.782 11:37:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.782 --rc genhtml_branch_coverage=1 00:05:06.782 --rc genhtml_function_coverage=1 00:05:06.782 --rc genhtml_legend=1 00:05:06.782 --rc geninfo_all_blocks=1 00:05:06.782 --rc geninfo_unexecuted_blocks=1 00:05:06.782 00:05:06.782 ' 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.782 --rc genhtml_branch_coverage=1 00:05:06.782 --rc genhtml_function_coverage=1 00:05:06.782 --rc genhtml_legend=1 00:05:06.782 --rc geninfo_all_blocks=1 00:05:06.782 --rc geninfo_unexecuted_blocks=1 00:05:06.782 00:05:06.782 ' 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:06.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.782 --rc genhtml_branch_coverage=1 00:05:06.782 --rc genhtml_function_coverage=1 00:05:06.782 --rc genhtml_legend=1 00:05:06.782 --rc geninfo_all_blocks=1 00:05:06.782 --rc geninfo_unexecuted_blocks=1 00:05:06.782 00:05:06.782 ' 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.782 --rc genhtml_branch_coverage=1 00:05:06.782 --rc genhtml_function_coverage=1 00:05:06.782 --rc genhtml_legend=1 00:05:06.782 --rc geninfo_all_blocks=1 00:05:06.782 --rc geninfo_unexecuted_blocks=1 00:05:06.782 00:05:06.782 ' 00:05:06.782 11:37:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.782 11:37:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:06.782 11:37:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.782 11:37:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.782 ************************************ 00:05:06.782 START TEST skip_rpc 00:05:06.782 ************************************ 00:05:06.782 11:37:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:06.782 11:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57302 00:05:06.782 11:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:06.782 11:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.782 11:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:06.782 [2024-11-04 11:37:31.904316] Starting SPDK v25.01-pre 
git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:05:06.782 [2024-11-04 11:37:31.904478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57302 ] 00:05:06.782 [2024-11-04 11:37:32.083375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.782 [2024-11-04 11:37:32.204980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57302 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57302 ']' 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57302 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57302 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:12.054 killing process with pid 57302 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57302' 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57302 00:05:12.054 11:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57302 00:05:14.585 00:05:14.585 real 0m7.711s 00:05:14.585 user 0m7.212s 00:05:14.585 sys 0m0.402s 00:05:14.585 11:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.585 11:37:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.585 ************************************ 00:05:14.585 END TEST skip_rpc 00:05:14.585 ************************************ 00:05:14.585 11:37:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:14.585 11:37:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.585 11:37:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.585 11:37:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.585 
************************************ 00:05:14.585 START TEST skip_rpc_with_json 00:05:14.585 ************************************ 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57417 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57417 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57417 ']' 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.585 11:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.585 [2024-11-04 11:37:39.675087] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:05:14.585 [2024-11-04 11:37:39.675211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57417 ] 00:05:14.585 [2024-11-04 11:37:39.847907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.585 [2024-11-04 11:37:39.971536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.522 [2024-11-04 11:37:40.876381] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:15.522 request: 00:05:15.522 { 00:05:15.522 "trtype": "tcp", 00:05:15.522 "method": "nvmf_get_transports", 00:05:15.522 "req_id": 1 00:05:15.522 } 00:05:15.522 Got JSON-RPC error response 00:05:15.522 response: 00:05:15.522 { 00:05:15.522 "code": -19, 00:05:15.522 "message": "No such device" 00:05:15.522 } 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.522 [2024-11-04 11:37:40.888533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.522 11:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.781 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.781 11:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.781 { 00:05:15.781 "subsystems": [ 00:05:15.781 { 00:05:15.781 "subsystem": "fsdev", 00:05:15.781 "config": [ 00:05:15.781 { 00:05:15.781 "method": "fsdev_set_opts", 00:05:15.781 "params": { 00:05:15.781 "fsdev_io_pool_size": 65535, 00:05:15.781 "fsdev_io_cache_size": 256 00:05:15.781 } 00:05:15.781 } 00:05:15.781 ] 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "subsystem": "keyring", 00:05:15.781 "config": [] 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "subsystem": "iobuf", 00:05:15.781 "config": [ 00:05:15.781 { 00:05:15.781 "method": "iobuf_set_options", 00:05:15.781 "params": { 00:05:15.781 "small_pool_count": 8192, 00:05:15.781 "large_pool_count": 1024, 00:05:15.781 "small_bufsize": 8192, 00:05:15.781 "large_bufsize": 135168, 00:05:15.781 "enable_numa": false 00:05:15.781 } 00:05:15.781 } 00:05:15.781 ] 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "subsystem": "sock", 00:05:15.781 "config": [ 00:05:15.781 { 00:05:15.781 "method": "sock_set_default_impl", 00:05:15.781 "params": { 00:05:15.781 "impl_name": "posix" 00:05:15.781 } 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "method": "sock_impl_set_options", 00:05:15.781 "params": { 00:05:15.781 "impl_name": "ssl", 00:05:15.781 "recv_buf_size": 4096, 00:05:15.781 "send_buf_size": 4096, 00:05:15.781 "enable_recv_pipe": true, 00:05:15.781 "enable_quickack": false, 00:05:15.781 
"enable_placement_id": 0, 00:05:15.781 "enable_zerocopy_send_server": true, 00:05:15.781 "enable_zerocopy_send_client": false, 00:05:15.781 "zerocopy_threshold": 0, 00:05:15.781 "tls_version": 0, 00:05:15.781 "enable_ktls": false 00:05:15.781 } 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "method": "sock_impl_set_options", 00:05:15.781 "params": { 00:05:15.781 "impl_name": "posix", 00:05:15.781 "recv_buf_size": 2097152, 00:05:15.781 "send_buf_size": 2097152, 00:05:15.781 "enable_recv_pipe": true, 00:05:15.781 "enable_quickack": false, 00:05:15.781 "enable_placement_id": 0, 00:05:15.781 "enable_zerocopy_send_server": true, 00:05:15.781 "enable_zerocopy_send_client": false, 00:05:15.781 "zerocopy_threshold": 0, 00:05:15.781 "tls_version": 0, 00:05:15.781 "enable_ktls": false 00:05:15.781 } 00:05:15.781 } 00:05:15.781 ] 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "subsystem": "vmd", 00:05:15.781 "config": [] 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "subsystem": "accel", 00:05:15.781 "config": [ 00:05:15.781 { 00:05:15.781 "method": "accel_set_options", 00:05:15.781 "params": { 00:05:15.781 "small_cache_size": 128, 00:05:15.781 "large_cache_size": 16, 00:05:15.781 "task_count": 2048, 00:05:15.781 "sequence_count": 2048, 00:05:15.781 "buf_count": 2048 00:05:15.781 } 00:05:15.781 } 00:05:15.781 ] 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "subsystem": "bdev", 00:05:15.781 "config": [ 00:05:15.781 { 00:05:15.781 "method": "bdev_set_options", 00:05:15.781 "params": { 00:05:15.781 "bdev_io_pool_size": 65535, 00:05:15.781 "bdev_io_cache_size": 256, 00:05:15.781 "bdev_auto_examine": true, 00:05:15.781 "iobuf_small_cache_size": 128, 00:05:15.781 "iobuf_large_cache_size": 16 00:05:15.781 } 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "method": "bdev_raid_set_options", 00:05:15.781 "params": { 00:05:15.781 "process_window_size_kb": 1024, 00:05:15.781 "process_max_bandwidth_mb_sec": 0 00:05:15.781 } 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "method": "bdev_iscsi_set_options", 
00:05:15.781 "params": { 00:05:15.781 "timeout_sec": 30 00:05:15.781 } 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "method": "bdev_nvme_set_options", 00:05:15.781 "params": { 00:05:15.781 "action_on_timeout": "none", 00:05:15.781 "timeout_us": 0, 00:05:15.781 "timeout_admin_us": 0, 00:05:15.781 "keep_alive_timeout_ms": 10000, 00:05:15.781 "arbitration_burst": 0, 00:05:15.781 "low_priority_weight": 0, 00:05:15.781 "medium_priority_weight": 0, 00:05:15.781 "high_priority_weight": 0, 00:05:15.781 "nvme_adminq_poll_period_us": 10000, 00:05:15.781 "nvme_ioq_poll_period_us": 0, 00:05:15.781 "io_queue_requests": 0, 00:05:15.781 "delay_cmd_submit": true, 00:05:15.781 "transport_retry_count": 4, 00:05:15.781 "bdev_retry_count": 3, 00:05:15.781 "transport_ack_timeout": 0, 00:05:15.781 "ctrlr_loss_timeout_sec": 0, 00:05:15.781 "reconnect_delay_sec": 0, 00:05:15.781 "fast_io_fail_timeout_sec": 0, 00:05:15.781 "disable_auto_failback": false, 00:05:15.781 "generate_uuids": false, 00:05:15.781 "transport_tos": 0, 00:05:15.781 "nvme_error_stat": false, 00:05:15.781 "rdma_srq_size": 0, 00:05:15.781 "io_path_stat": false, 00:05:15.781 "allow_accel_sequence": false, 00:05:15.781 "rdma_max_cq_size": 0, 00:05:15.781 "rdma_cm_event_timeout_ms": 0, 00:05:15.781 "dhchap_digests": [ 00:05:15.781 "sha256", 00:05:15.781 "sha384", 00:05:15.781 "sha512" 00:05:15.781 ], 00:05:15.781 "dhchap_dhgroups": [ 00:05:15.781 "null", 00:05:15.781 "ffdhe2048", 00:05:15.781 "ffdhe3072", 00:05:15.781 "ffdhe4096", 00:05:15.781 "ffdhe6144", 00:05:15.781 "ffdhe8192" 00:05:15.781 ] 00:05:15.781 } 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "method": "bdev_nvme_set_hotplug", 00:05:15.781 "params": { 00:05:15.781 "period_us": 100000, 00:05:15.781 "enable": false 00:05:15.781 } 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "method": "bdev_wait_for_examine" 00:05:15.781 } 00:05:15.781 ] 00:05:15.781 }, 00:05:15.781 { 00:05:15.781 "subsystem": "scsi", 00:05:15.781 "config": null 00:05:15.781 }, 00:05:15.781 { 
00:05:15.782 "subsystem": "scheduler", 00:05:15.782 "config": [ 00:05:15.782 { 00:05:15.782 "method": "framework_set_scheduler", 00:05:15.782 "params": { 00:05:15.782 "name": "static" 00:05:15.782 } 00:05:15.782 } 00:05:15.782 ] 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "subsystem": "vhost_scsi", 00:05:15.782 "config": [] 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "subsystem": "vhost_blk", 00:05:15.782 "config": [] 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "subsystem": "ublk", 00:05:15.782 "config": [] 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "subsystem": "nbd", 00:05:15.782 "config": [] 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "subsystem": "nvmf", 00:05:15.782 "config": [ 00:05:15.782 { 00:05:15.782 "method": "nvmf_set_config", 00:05:15.782 "params": { 00:05:15.782 "discovery_filter": "match_any", 00:05:15.782 "admin_cmd_passthru": { 00:05:15.782 "identify_ctrlr": false 00:05:15.782 }, 00:05:15.782 "dhchap_digests": [ 00:05:15.782 "sha256", 00:05:15.782 "sha384", 00:05:15.782 "sha512" 00:05:15.782 ], 00:05:15.782 "dhchap_dhgroups": [ 00:05:15.782 "null", 00:05:15.782 "ffdhe2048", 00:05:15.782 "ffdhe3072", 00:05:15.782 "ffdhe4096", 00:05:15.782 "ffdhe6144", 00:05:15.782 "ffdhe8192" 00:05:15.782 ] 00:05:15.782 } 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "method": "nvmf_set_max_subsystems", 00:05:15.782 "params": { 00:05:15.782 "max_subsystems": 1024 00:05:15.782 } 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "method": "nvmf_set_crdt", 00:05:15.782 "params": { 00:05:15.782 "crdt1": 0, 00:05:15.782 "crdt2": 0, 00:05:15.782 "crdt3": 0 00:05:15.782 } 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "method": "nvmf_create_transport", 00:05:15.782 "params": { 00:05:15.782 "trtype": "TCP", 00:05:15.782 "max_queue_depth": 128, 00:05:15.782 "max_io_qpairs_per_ctrlr": 127, 00:05:15.782 "in_capsule_data_size": 4096, 00:05:15.782 "max_io_size": 131072, 00:05:15.782 "io_unit_size": 131072, 00:05:15.782 "max_aq_depth": 128, 00:05:15.782 "num_shared_buffers": 511, 
00:05:15.782 "buf_cache_size": 4294967295, 00:05:15.782 "dif_insert_or_strip": false, 00:05:15.782 "zcopy": false, 00:05:15.782 "c2h_success": true, 00:05:15.782 "sock_priority": 0, 00:05:15.782 "abort_timeout_sec": 1, 00:05:15.782 "ack_timeout": 0, 00:05:15.782 "data_wr_pool_size": 0 00:05:15.782 } 00:05:15.782 } 00:05:15.782 ] 00:05:15.782 }, 00:05:15.782 { 00:05:15.782 "subsystem": "iscsi", 00:05:15.782 "config": [ 00:05:15.782 { 00:05:15.782 "method": "iscsi_set_options", 00:05:15.782 "params": { 00:05:15.782 "node_base": "iqn.2016-06.io.spdk", 00:05:15.782 "max_sessions": 128, 00:05:15.782 "max_connections_per_session": 2, 00:05:15.782 "max_queue_depth": 64, 00:05:15.782 "default_time2wait": 2, 00:05:15.782 "default_time2retain": 20, 00:05:15.782 "first_burst_length": 8192, 00:05:15.782 "immediate_data": true, 00:05:15.782 "allow_duplicated_isid": false, 00:05:15.782 "error_recovery_level": 0, 00:05:15.782 "nop_timeout": 60, 00:05:15.782 "nop_in_interval": 30, 00:05:15.782 "disable_chap": false, 00:05:15.782 "require_chap": false, 00:05:15.782 "mutual_chap": false, 00:05:15.782 "chap_group": 0, 00:05:15.782 "max_large_datain_per_connection": 64, 00:05:15.782 "max_r2t_per_connection": 4, 00:05:15.782 "pdu_pool_size": 36864, 00:05:15.782 "immediate_data_pool_size": 16384, 00:05:15.782 "data_out_pool_size": 2048 00:05:15.782 } 00:05:15.782 } 00:05:15.782 ] 00:05:15.782 } 00:05:15.782 ] 00:05:15.782 } 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57417 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57417 ']' 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57417 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57417 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:15.782 killing process with pid 57417 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57417' 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57417 00:05:15.782 11:37:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57417 00:05:18.320 11:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57473 00:05:18.320 11:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.320 11:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57473 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57473 ']' 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57473 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57473 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57473' 00:05:23.608 killing process with pid 57473 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57473 00:05:23.608 11:37:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57473 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:26.145 ************************************ 00:05:26.145 END TEST skip_rpc_with_json 00:05:26.145 ************************************ 00:05:26.145 00:05:26.145 real 0m11.796s 00:05:26.145 user 0m11.223s 00:05:26.145 sys 0m0.851s 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.145 11:37:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.145 11:37:51 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.145 11:37:51 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.145 11:37:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.145 ************************************ 00:05:26.145 START TEST skip_rpc_with_delay 00:05:26.145 ************************************ 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:26.145 11:37:51 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:26.145 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.145 [2024-11-04 11:37:51.546390] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:26.146 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:05:26.146 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:26.146 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:26.146 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:26.146
00:05:26.146 real 0m0.176s
00:05:26.146 user 0m0.091s
00:05:26.146 sys 0m0.083s
00:05:26.146 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:26.146 11:37:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:26.146 ************************************
00:05:26.146 END TEST skip_rpc_with_delay
00:05:26.146 ************************************
00:05:26.146 11:37:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:26.406 11:37:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:26.406 11:37:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:26.406 11:37:51 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:26.406 11:37:51 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:26.406 11:37:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:26.406 ************************************
00:05:26.406 START TEST exit_on_failed_rpc_init
00:05:26.406 ************************************
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57607
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57607
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57607 ']'
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:26.406 11:37:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:26.406 [2024-11-04 11:37:51.784841] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization...
00:05:26.406 [2024-11-04 11:37:51.784974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57607 ]
00:05:26.666 [2024-11-04 11:37:51.958235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:26.666 [2024-11-04 11:37:52.080057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:05:27.604 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:27.863 [2024-11-04 11:37:53.140590] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization...
00:05:27.863 [2024-11-04 11:37:53.140712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57630 ]
00:05:27.863 [2024-11-04 11:37:53.315589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.122 [2024-11-04 11:37:53.448768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:28.122 [2024-11-04 11:37:53.448894] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:28.122 [2024-11-04 11:37:53.448926] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:28.122 [2024-11-04 11:37:53.448948] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57607
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57607 ']'
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57607
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57607
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 57607
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57607'
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57607
00:05:28.381 11:37:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57607
00:05:30.916
00:05:30.916 real 0m4.567s
00:05:30.916 user 0m5.032s
00:05:30.916 sys 0m0.562s
00:05:30.916 11:37:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:30.916 11:37:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:30.916 ************************************
00:05:30.916 END TEST exit_on_failed_rpc_init
00:05:30.916 ************************************
00:05:30.916 11:37:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:30.916
00:05:30.916 real 0m24.715s
00:05:30.916 user 0m23.752s
00:05:30.916 sys 0m2.187s
00:05:30.916 11:37:56 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:30.916 11:37:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.916 ************************************
00:05:30.916 END TEST skip_rpc
00:05:30.916 ************************************
00:05:30.916 11:37:56 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:30.916 11:37:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:30.916 11:37:56 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:30.916 11:37:56 -- common/autotest_common.sh@10 -- # set +x
00:05:30.916 ************************************
00:05:30.916 START TEST rpc_client
00:05:30.916 ************************************
00:05:30.916 11:37:56 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:31.176 * Looking for test storage...
00:05:31.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:31.176 11:37:56 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.176 --rc genhtml_branch_coverage=1
00:05:31.176 --rc genhtml_function_coverage=1
00:05:31.176 --rc genhtml_legend=1
00:05:31.176 --rc geninfo_all_blocks=1
00:05:31.176 --rc geninfo_unexecuted_blocks=1
00:05:31.176
00:05:31.176 '
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.176 --rc genhtml_branch_coverage=1
00:05:31.176 --rc genhtml_function_coverage=1
00:05:31.176 --rc genhtml_legend=1
00:05:31.176 --rc geninfo_all_blocks=1
00:05:31.176 --rc geninfo_unexecuted_blocks=1
00:05:31.176
00:05:31.176 '
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.176 --rc genhtml_branch_coverage=1
00:05:31.176 --rc genhtml_function_coverage=1
00:05:31.176 --rc genhtml_legend=1
00:05:31.176 --rc geninfo_all_blocks=1
00:05:31.176 --rc geninfo_unexecuted_blocks=1
00:05:31.176
00:05:31.176 '
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:31.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.176 --rc genhtml_branch_coverage=1
00:05:31.176 --rc genhtml_function_coverage=1
00:05:31.176 --rc genhtml_legend=1
00:05:31.176 --rc geninfo_all_blocks=1
00:05:31.176 --rc geninfo_unexecuted_blocks=1
00:05:31.176
00:05:31.176 '
00:05:31.176 11:37:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
OK
00:05:31.176 11:37:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:31.176
00:05:31.176 real 0m0.287s
00:05:31.176 user 0m0.153s
00:05:31.176 sys 0m0.150s
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:31.176 11:37:56 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:31.176 ************************************
00:05:31.176 END TEST rpc_client
00:05:31.176 ************************************
00:05:31.437 11:37:56 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:31.437 11:37:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:31.437 11:37:56 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:31.437 11:37:56 -- common/autotest_common.sh@10 -- # set +x
00:05:31.437 ************************************
00:05:31.437 START TEST json_config
00:05:31.437 ************************************
00:05:31.437 11:37:56 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:31.437 11:37:56 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:31.437 11:37:56 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:05:31.437 11:37:56 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:31.437 11:37:56 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:31.437 11:37:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:31.437 11:37:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:31.437 11:37:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:31.437 11:37:56 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:31.437 11:37:56 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:31.437 11:37:56 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:31.437 11:37:56 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:31.437 11:37:56 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:31.437 11:37:56 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:31.437 11:37:56 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:31.437 11:37:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:31.437 11:37:56 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:31.437 11:37:56 json_config -- scripts/common.sh@345 -- # : 1
00:05:31.437 11:37:56 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:31.437 11:37:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:31.437 11:37:56 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:31.437 11:37:56 json_config -- scripts/common.sh@353 -- # local d=1
00:05:31.437 11:37:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:31.437 11:37:56 json_config -- scripts/common.sh@355 -- # echo 1
00:05:31.437 11:37:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:31.437 11:37:56 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:31.437 11:37:56 json_config -- scripts/common.sh@353 -- # local d=2
00:05:31.437 11:37:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:31.437 11:37:56 json_config -- scripts/common.sh@355 -- # echo 2
00:05:31.438 11:37:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:31.438 11:37:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:31.438 11:37:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:31.438 11:37:56 json_config -- scripts/common.sh@368 -- # return 0
00:05:31.438 11:37:56 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:31.438 11:37:56 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:31.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.438 --rc genhtml_branch_coverage=1
00:05:31.438 --rc genhtml_function_coverage=1
00:05:31.438 --rc genhtml_legend=1
00:05:31.438 --rc geninfo_all_blocks=1
00:05:31.438 --rc geninfo_unexecuted_blocks=1
00:05:31.438
00:05:31.438 '
00:05:31.438 11:37:56 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:31.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.438 --rc genhtml_branch_coverage=1
00:05:31.438 --rc genhtml_function_coverage=1
00:05:31.438 --rc genhtml_legend=1
00:05:31.438 --rc geninfo_all_blocks=1
00:05:31.438 --rc geninfo_unexecuted_blocks=1
00:05:31.438
00:05:31.438 '
00:05:31.438 11:37:56 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:31.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.438 --rc genhtml_branch_coverage=1
00:05:31.438 --rc genhtml_function_coverage=1
00:05:31.438 --rc genhtml_legend=1
00:05:31.438 --rc geninfo_all_blocks=1
00:05:31.438 --rc geninfo_unexecuted_blocks=1
00:05:31.438
00:05:31.438 '
00:05:31.438 11:37:56 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:31.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.438 --rc genhtml_branch_coverage=1
00:05:31.438 --rc genhtml_function_coverage=1
00:05:31.438 --rc genhtml_legend=1
00:05:31.438 --rc geninfo_all_blocks=1
00:05:31.438 --rc geninfo_unexecuted_blocks=1
00:05:31.438
00:05:31.438 '
00:05:31.438 11:37:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8d4e942-011b-4e07-bdf8-d00d699eab30
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b8d4e942-011b-4e07-bdf8-d00d699eab30
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:31.438 11:37:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:31.438 11:37:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:31.438 11:37:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:31.438 11:37:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:31.438 11:37:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.438 11:37:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.438 11:37:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.438 11:37:56 json_config -- paths/export.sh@5 -- # export PATH
00:05:31.438 11:37:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@51 -- # : 0
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:31.438 11:37:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:31.438 11:37:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:05:31.438 11:37:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.438 11:37:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.438 11:37:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.439 11:37:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.439 WARNING: No tests are enabled so not running JSON configuration tests 00:05:31.439 11:37:56 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:31.439 11:37:56 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:31.439 00:05:31.439 real 0m0.225s 00:05:31.439 user 0m0.136s 00:05:31.439 sys 0m0.096s 00:05:31.439 11:37:56 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.439 11:37:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.439 ************************************ 00:05:31.439 END TEST json_config 00:05:31.439 ************************************ 00:05:31.699 11:37:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.699 11:37:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.699 11:37:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.700 11:37:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.700 ************************************ 00:05:31.700 START TEST json_config_extra_key 00:05:31.700 ************************************ 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.700 11:37:57 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.700 --rc genhtml_branch_coverage=1 00:05:31.700 --rc genhtml_function_coverage=1 00:05:31.700 --rc genhtml_legend=1 00:05:31.700 --rc geninfo_all_blocks=1 00:05:31.700 --rc geninfo_unexecuted_blocks=1 00:05:31.700 00:05:31.700 ' 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.700 --rc genhtml_branch_coverage=1 00:05:31.700 --rc genhtml_function_coverage=1 00:05:31.700 --rc 
genhtml_legend=1 00:05:31.700 --rc geninfo_all_blocks=1 00:05:31.700 --rc geninfo_unexecuted_blocks=1 00:05:31.700 00:05:31.700 ' 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.700 --rc genhtml_branch_coverage=1 00:05:31.700 --rc genhtml_function_coverage=1 00:05:31.700 --rc genhtml_legend=1 00:05:31.700 --rc geninfo_all_blocks=1 00:05:31.700 --rc geninfo_unexecuted_blocks=1 00:05:31.700 00:05:31.700 ' 00:05:31.700 11:37:57 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.700 --rc genhtml_branch_coverage=1 00:05:31.700 --rc genhtml_function_coverage=1 00:05:31.700 --rc genhtml_legend=1 00:05:31.700 --rc geninfo_all_blocks=1 00:05:31.700 --rc geninfo_unexecuted_blocks=1 00:05:31.700 00:05:31.700 ' 00:05:31.700 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b8d4e942-011b-4e07-bdf8-d00d699eab30 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b8d4e942-011b-4e07-bdf8-d00d699eab30 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.700 11:37:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.700 11:37:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.700 11:37:57 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.700 11:37:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.700 11:37:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:31.700 11:37:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.700 11:37:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.701 INFO: launching applications... 00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:31.701 11:37:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57840 00:05:31.701 Waiting for target to run... 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57840 /var/tmp/spdk_tgt.sock 00:05:31.701 11:37:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.701 11:37:57 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57840 ']' 00:05:31.701 11:37:57 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.701 11:37:57 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:31.701 11:37:57 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.701 11:37:57 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.701 11:37:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.960 [2024-11-04 11:37:57.311548] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:05:31.960 [2024-11-04 11:37:57.311664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57840 ] 00:05:32.220 [2024-11-04 11:37:57.691606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.480 [2024-11-04 11:37:57.791863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.049 11:37:58 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:33.049 11:37:58 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:33.049 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:33.049 INFO: shutting down applications... 00:05:33.049 11:37:58 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:33.049 11:37:58 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57840 ]] 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57840 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57840 00:05:33.049 11:37:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.616 11:37:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.616 11:37:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.616 11:37:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57840 00:05:33.616 11:37:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.203 11:37:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.203 11:37:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.203 11:37:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57840 00:05:34.203 11:37:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.772 11:38:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.772 11:38:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.772 11:38:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57840 00:05:34.772 11:38:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.032 11:38:00 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:35.032 11:38:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.032 11:38:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57840 00:05:35.032 11:38:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.601 11:38:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.601 11:38:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.601 11:38:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57840 00:05:35.601 11:38:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.172 11:38:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.172 11:38:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.172 11:38:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57840 00:05:36.172 11:38:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.172 11:38:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:36.172 11:38:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.172 SPDK target shutdown done 00:05:36.172 11:38:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.172 Success 00:05:36.172 11:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:36.172 ************************************ 00:05:36.172 END TEST json_config_extra_key 00:05:36.172 ************************************ 00:05:36.172 00:05:36.172 real 0m4.541s 00:05:36.172 user 0m4.056s 00:05:36.172 sys 0m0.540s 00:05:36.172 11:38:01 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.172 11:38:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.172 11:38:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.172 11:38:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.172 11:38:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.172 11:38:01 -- common/autotest_common.sh@10 -- # set +x 00:05:36.172 ************************************ 00:05:36.172 START TEST alias_rpc 00:05:36.172 ************************************ 00:05:36.172 11:38:01 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.432 * Looking for test storage... 00:05:36.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:36.432 11:38:01 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.432 11:38:01 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:36.432 11:38:01 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.432 11:38:01 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.432 11:38:01 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.432 11:38:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.433 11:38:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.433 --rc genhtml_branch_coverage=1 00:05:36.433 --rc genhtml_function_coverage=1 00:05:36.433 --rc genhtml_legend=1 00:05:36.433 --rc geninfo_all_blocks=1 00:05:36.433 --rc geninfo_unexecuted_blocks=1 00:05:36.433 00:05:36.433 ' 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.433 --rc genhtml_branch_coverage=1 00:05:36.433 --rc genhtml_function_coverage=1 00:05:36.433 --rc 
genhtml_legend=1 00:05:36.433 --rc geninfo_all_blocks=1 00:05:36.433 --rc geninfo_unexecuted_blocks=1 00:05:36.433 00:05:36.433 ' 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.433 --rc genhtml_branch_coverage=1 00:05:36.433 --rc genhtml_function_coverage=1 00:05:36.433 --rc genhtml_legend=1 00:05:36.433 --rc geninfo_all_blocks=1 00:05:36.433 --rc geninfo_unexecuted_blocks=1 00:05:36.433 00:05:36.433 ' 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.433 --rc genhtml_branch_coverage=1 00:05:36.433 --rc genhtml_function_coverage=1 00:05:36.433 --rc genhtml_legend=1 00:05:36.433 --rc geninfo_all_blocks=1 00:05:36.433 --rc geninfo_unexecuted_blocks=1 00:05:36.433 00:05:36.433 ' 00:05:36.433 11:38:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.433 11:38:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57946 00:05:36.433 11:38:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.433 11:38:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57946 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57946 ']' 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.433 11:38:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.433 [2024-11-04 11:38:01.930326] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:05:36.433 [2024-11-04 11:38:01.930462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57946 ] 00:05:36.693 [2024-11-04 11:38:02.104694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.952 [2024-11-04 11:38:02.221585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:37.892 11:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:37.892 11:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57946 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57946 ']' 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57946 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57946 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:37.892 killing process with pid 57946 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57946' 00:05:37.892 11:38:03 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57946 00:05:37.892 11:38:03 alias_rpc -- common/autotest_common.sh@976 -- # wait 57946 00:05:40.432 ************************************ 00:05:40.432 END TEST alias_rpc 00:05:40.432 ************************************ 00:05:40.432 00:05:40.432 real 0m4.173s 00:05:40.432 user 0m4.160s 00:05:40.432 sys 0m0.581s 00:05:40.432 11:38:05 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.432 11:38:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.432 11:38:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:40.432 11:38:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:40.432 11:38:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:40.432 11:38:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.432 11:38:05 -- common/autotest_common.sh@10 -- # set +x 00:05:40.432 ************************************ 00:05:40.432 START TEST spdkcli_tcp 00:05:40.432 ************************************ 00:05:40.432 11:38:05 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:40.692 * Looking for test storage... 
00:05:40.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:40.693 11:38:05 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:40.693 11:38:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:40.693 11:38:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.693 11:38:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.693 --rc genhtml_branch_coverage=1 00:05:40.693 --rc genhtml_function_coverage=1 00:05:40.693 --rc genhtml_legend=1 00:05:40.693 --rc geninfo_all_blocks=1 00:05:40.693 --rc geninfo_unexecuted_blocks=1 00:05:40.693 00:05:40.693 ' 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.693 --rc genhtml_branch_coverage=1 00:05:40.693 --rc genhtml_function_coverage=1 00:05:40.693 --rc genhtml_legend=1 00:05:40.693 --rc geninfo_all_blocks=1 00:05:40.693 --rc geninfo_unexecuted_blocks=1 00:05:40.693 00:05:40.693 ' 00:05:40.693 11:38:06 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.693 --rc genhtml_branch_coverage=1 00:05:40.693 --rc genhtml_function_coverage=1 00:05:40.693 --rc genhtml_legend=1 00:05:40.693 --rc geninfo_all_blocks=1 00:05:40.693 --rc geninfo_unexecuted_blocks=1 00:05:40.693 00:05:40.693 ' 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.693 --rc genhtml_branch_coverage=1 00:05:40.693 --rc genhtml_function_coverage=1 00:05:40.693 --rc genhtml_legend=1 00:05:40.693 --rc geninfo_all_blocks=1 00:05:40.693 --rc geninfo_unexecuted_blocks=1 00:05:40.693 00:05:40.693 ' 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58058 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:40.693 11:38:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58058 00:05:40.693 11:38:06 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 58058 ']' 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.693 11:38:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:40.693 [2024-11-04 11:38:06.188456] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:05:40.693 [2024-11-04 11:38:06.188576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58058 ] 00:05:40.954 [2024-11-04 11:38:06.363871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.213 [2024-11-04 11:38:06.482037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.213 [2024-11-04 11:38:06.482076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.154 11:38:07 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.154 11:38:07 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:42.154 11:38:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58076 00:05:42.154 11:38:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.154 11:38:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.154 [ 00:05:42.154 "bdev_malloc_delete", 
00:05:42.154 "bdev_malloc_create", 00:05:42.154 "bdev_null_resize", 00:05:42.154 "bdev_null_delete", 00:05:42.154 "bdev_null_create", 00:05:42.154 "bdev_nvme_cuse_unregister", 00:05:42.154 "bdev_nvme_cuse_register", 00:05:42.154 "bdev_opal_new_user", 00:05:42.154 "bdev_opal_set_lock_state", 00:05:42.154 "bdev_opal_delete", 00:05:42.154 "bdev_opal_get_info", 00:05:42.154 "bdev_opal_create", 00:05:42.154 "bdev_nvme_opal_revert", 00:05:42.154 "bdev_nvme_opal_init", 00:05:42.154 "bdev_nvme_send_cmd", 00:05:42.154 "bdev_nvme_set_keys", 00:05:42.154 "bdev_nvme_get_path_iostat", 00:05:42.154 "bdev_nvme_get_mdns_discovery_info", 00:05:42.154 "bdev_nvme_stop_mdns_discovery", 00:05:42.154 "bdev_nvme_start_mdns_discovery", 00:05:42.154 "bdev_nvme_set_multipath_policy", 00:05:42.154 "bdev_nvme_set_preferred_path", 00:05:42.154 "bdev_nvme_get_io_paths", 00:05:42.154 "bdev_nvme_remove_error_injection", 00:05:42.154 "bdev_nvme_add_error_injection", 00:05:42.154 "bdev_nvme_get_discovery_info", 00:05:42.154 "bdev_nvme_stop_discovery", 00:05:42.154 "bdev_nvme_start_discovery", 00:05:42.154 "bdev_nvme_get_controller_health_info", 00:05:42.154 "bdev_nvme_disable_controller", 00:05:42.154 "bdev_nvme_enable_controller", 00:05:42.154 "bdev_nvme_reset_controller", 00:05:42.154 "bdev_nvme_get_transport_statistics", 00:05:42.154 "bdev_nvme_apply_firmware", 00:05:42.154 "bdev_nvme_detach_controller", 00:05:42.154 "bdev_nvme_get_controllers", 00:05:42.154 "bdev_nvme_attach_controller", 00:05:42.154 "bdev_nvme_set_hotplug", 00:05:42.154 "bdev_nvme_set_options", 00:05:42.154 "bdev_passthru_delete", 00:05:42.154 "bdev_passthru_create", 00:05:42.154 "bdev_lvol_set_parent_bdev", 00:05:42.154 "bdev_lvol_set_parent", 00:05:42.154 "bdev_lvol_check_shallow_copy", 00:05:42.154 "bdev_lvol_start_shallow_copy", 00:05:42.154 "bdev_lvol_grow_lvstore", 00:05:42.154 "bdev_lvol_get_lvols", 00:05:42.154 "bdev_lvol_get_lvstores", 00:05:42.154 "bdev_lvol_delete", 00:05:42.154 "bdev_lvol_set_read_only", 
00:05:42.154 "bdev_lvol_resize", 00:05:42.154 "bdev_lvol_decouple_parent", 00:05:42.154 "bdev_lvol_inflate", 00:05:42.154 "bdev_lvol_rename", 00:05:42.154 "bdev_lvol_clone_bdev", 00:05:42.154 "bdev_lvol_clone", 00:05:42.155 "bdev_lvol_snapshot", 00:05:42.155 "bdev_lvol_create", 00:05:42.155 "bdev_lvol_delete_lvstore", 00:05:42.155 "bdev_lvol_rename_lvstore", 00:05:42.155 "bdev_lvol_create_lvstore", 00:05:42.155 "bdev_raid_set_options", 00:05:42.155 "bdev_raid_remove_base_bdev", 00:05:42.155 "bdev_raid_add_base_bdev", 00:05:42.155 "bdev_raid_delete", 00:05:42.155 "bdev_raid_create", 00:05:42.155 "bdev_raid_get_bdevs", 00:05:42.155 "bdev_error_inject_error", 00:05:42.155 "bdev_error_delete", 00:05:42.155 "bdev_error_create", 00:05:42.155 "bdev_split_delete", 00:05:42.155 "bdev_split_create", 00:05:42.155 "bdev_delay_delete", 00:05:42.155 "bdev_delay_create", 00:05:42.155 "bdev_delay_update_latency", 00:05:42.155 "bdev_zone_block_delete", 00:05:42.155 "bdev_zone_block_create", 00:05:42.155 "blobfs_create", 00:05:42.155 "blobfs_detect", 00:05:42.155 "blobfs_set_cache_size", 00:05:42.155 "bdev_aio_delete", 00:05:42.155 "bdev_aio_rescan", 00:05:42.155 "bdev_aio_create", 00:05:42.155 "bdev_ftl_set_property", 00:05:42.155 "bdev_ftl_get_properties", 00:05:42.155 "bdev_ftl_get_stats", 00:05:42.155 "bdev_ftl_unmap", 00:05:42.155 "bdev_ftl_unload", 00:05:42.155 "bdev_ftl_delete", 00:05:42.155 "bdev_ftl_load", 00:05:42.155 "bdev_ftl_create", 00:05:42.155 "bdev_virtio_attach_controller", 00:05:42.155 "bdev_virtio_scsi_get_devices", 00:05:42.155 "bdev_virtio_detach_controller", 00:05:42.155 "bdev_virtio_blk_set_hotplug", 00:05:42.155 "bdev_iscsi_delete", 00:05:42.155 "bdev_iscsi_create", 00:05:42.155 "bdev_iscsi_set_options", 00:05:42.155 "accel_error_inject_error", 00:05:42.155 "ioat_scan_accel_module", 00:05:42.155 "dsa_scan_accel_module", 00:05:42.155 "iaa_scan_accel_module", 00:05:42.155 "keyring_file_remove_key", 00:05:42.155 "keyring_file_add_key", 00:05:42.155 
"keyring_linux_set_options", 00:05:42.155 "fsdev_aio_delete", 00:05:42.155 "fsdev_aio_create", 00:05:42.155 "iscsi_get_histogram", 00:05:42.155 "iscsi_enable_histogram", 00:05:42.155 "iscsi_set_options", 00:05:42.155 "iscsi_get_auth_groups", 00:05:42.155 "iscsi_auth_group_remove_secret", 00:05:42.155 "iscsi_auth_group_add_secret", 00:05:42.155 "iscsi_delete_auth_group", 00:05:42.155 "iscsi_create_auth_group", 00:05:42.155 "iscsi_set_discovery_auth", 00:05:42.155 "iscsi_get_options", 00:05:42.155 "iscsi_target_node_request_logout", 00:05:42.155 "iscsi_target_node_set_redirect", 00:05:42.155 "iscsi_target_node_set_auth", 00:05:42.155 "iscsi_target_node_add_lun", 00:05:42.155 "iscsi_get_stats", 00:05:42.155 "iscsi_get_connections", 00:05:42.155 "iscsi_portal_group_set_auth", 00:05:42.155 "iscsi_start_portal_group", 00:05:42.155 "iscsi_delete_portal_group", 00:05:42.155 "iscsi_create_portal_group", 00:05:42.155 "iscsi_get_portal_groups", 00:05:42.155 "iscsi_delete_target_node", 00:05:42.155 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.155 "iscsi_target_node_add_pg_ig_maps", 00:05:42.155 "iscsi_create_target_node", 00:05:42.155 "iscsi_get_target_nodes", 00:05:42.155 "iscsi_delete_initiator_group", 00:05:42.155 "iscsi_initiator_group_remove_initiators", 00:05:42.155 "iscsi_initiator_group_add_initiators", 00:05:42.155 "iscsi_create_initiator_group", 00:05:42.155 "iscsi_get_initiator_groups", 00:05:42.155 "nvmf_set_crdt", 00:05:42.155 "nvmf_set_config", 00:05:42.155 "nvmf_set_max_subsystems", 00:05:42.155 "nvmf_stop_mdns_prr", 00:05:42.155 "nvmf_publish_mdns_prr", 00:05:42.155 "nvmf_subsystem_get_listeners", 00:05:42.155 "nvmf_subsystem_get_qpairs", 00:05:42.155 "nvmf_subsystem_get_controllers", 00:05:42.155 "nvmf_get_stats", 00:05:42.155 "nvmf_get_transports", 00:05:42.155 "nvmf_create_transport", 00:05:42.155 "nvmf_get_targets", 00:05:42.155 "nvmf_delete_target", 00:05:42.155 "nvmf_create_target", 00:05:42.155 "nvmf_subsystem_allow_any_host", 00:05:42.155 
"nvmf_subsystem_set_keys", 00:05:42.155 "nvmf_subsystem_remove_host", 00:05:42.155 "nvmf_subsystem_add_host", 00:05:42.155 "nvmf_ns_remove_host", 00:05:42.155 "nvmf_ns_add_host", 00:05:42.155 "nvmf_subsystem_remove_ns", 00:05:42.155 "nvmf_subsystem_set_ns_ana_group", 00:05:42.155 "nvmf_subsystem_add_ns", 00:05:42.155 "nvmf_subsystem_listener_set_ana_state", 00:05:42.155 "nvmf_discovery_get_referrals", 00:05:42.155 "nvmf_discovery_remove_referral", 00:05:42.155 "nvmf_discovery_add_referral", 00:05:42.155 "nvmf_subsystem_remove_listener", 00:05:42.155 "nvmf_subsystem_add_listener", 00:05:42.155 "nvmf_delete_subsystem", 00:05:42.155 "nvmf_create_subsystem", 00:05:42.155 "nvmf_get_subsystems", 00:05:42.155 "env_dpdk_get_mem_stats", 00:05:42.155 "nbd_get_disks", 00:05:42.155 "nbd_stop_disk", 00:05:42.155 "nbd_start_disk", 00:05:42.155 "ublk_recover_disk", 00:05:42.155 "ublk_get_disks", 00:05:42.155 "ublk_stop_disk", 00:05:42.155 "ublk_start_disk", 00:05:42.155 "ublk_destroy_target", 00:05:42.156 "ublk_create_target", 00:05:42.156 "virtio_blk_create_transport", 00:05:42.156 "virtio_blk_get_transports", 00:05:42.156 "vhost_controller_set_coalescing", 00:05:42.156 "vhost_get_controllers", 00:05:42.156 "vhost_delete_controller", 00:05:42.156 "vhost_create_blk_controller", 00:05:42.156 "vhost_scsi_controller_remove_target", 00:05:42.156 "vhost_scsi_controller_add_target", 00:05:42.156 "vhost_start_scsi_controller", 00:05:42.156 "vhost_create_scsi_controller", 00:05:42.156 "thread_set_cpumask", 00:05:42.156 "scheduler_set_options", 00:05:42.156 "framework_get_governor", 00:05:42.156 "framework_get_scheduler", 00:05:42.156 "framework_set_scheduler", 00:05:42.156 "framework_get_reactors", 00:05:42.156 "thread_get_io_channels", 00:05:42.156 "thread_get_pollers", 00:05:42.156 "thread_get_stats", 00:05:42.156 "framework_monitor_context_switch", 00:05:42.156 "spdk_kill_instance", 00:05:42.156 "log_enable_timestamps", 00:05:42.156 "log_get_flags", 00:05:42.156 "log_clear_flag", 
00:05:42.156 "log_set_flag", 00:05:42.156 "log_get_level", 00:05:42.156 "log_set_level", 00:05:42.156 "log_get_print_level", 00:05:42.156 "log_set_print_level", 00:05:42.156 "framework_enable_cpumask_locks", 00:05:42.156 "framework_disable_cpumask_locks", 00:05:42.156 "framework_wait_init", 00:05:42.156 "framework_start_init", 00:05:42.156 "scsi_get_devices", 00:05:42.156 "bdev_get_histogram", 00:05:42.156 "bdev_enable_histogram", 00:05:42.156 "bdev_set_qos_limit", 00:05:42.156 "bdev_set_qd_sampling_period", 00:05:42.156 "bdev_get_bdevs", 00:05:42.156 "bdev_reset_iostat", 00:05:42.156 "bdev_get_iostat", 00:05:42.156 "bdev_examine", 00:05:42.156 "bdev_wait_for_examine", 00:05:42.156 "bdev_set_options", 00:05:42.156 "accel_get_stats", 00:05:42.156 "accel_set_options", 00:05:42.156 "accel_set_driver", 00:05:42.156 "accel_crypto_key_destroy", 00:05:42.156 "accel_crypto_keys_get", 00:05:42.156 "accel_crypto_key_create", 00:05:42.156 "accel_assign_opc", 00:05:42.156 "accel_get_module_info", 00:05:42.156 "accel_get_opc_assignments", 00:05:42.156 "vmd_rescan", 00:05:42.156 "vmd_remove_device", 00:05:42.156 "vmd_enable", 00:05:42.156 "sock_get_default_impl", 00:05:42.156 "sock_set_default_impl", 00:05:42.156 "sock_impl_set_options", 00:05:42.156 "sock_impl_get_options", 00:05:42.156 "iobuf_get_stats", 00:05:42.156 "iobuf_set_options", 00:05:42.156 "keyring_get_keys", 00:05:42.156 "framework_get_pci_devices", 00:05:42.156 "framework_get_config", 00:05:42.156 "framework_get_subsystems", 00:05:42.156 "fsdev_set_opts", 00:05:42.156 "fsdev_get_opts", 00:05:42.156 "trace_get_info", 00:05:42.156 "trace_get_tpoint_group_mask", 00:05:42.156 "trace_disable_tpoint_group", 00:05:42.156 "trace_enable_tpoint_group", 00:05:42.156 "trace_clear_tpoint_mask", 00:05:42.156 "trace_set_tpoint_mask", 00:05:42.156 "notify_get_notifications", 00:05:42.156 "notify_get_types", 00:05:42.156 "spdk_get_version", 00:05:42.156 "rpc_get_methods" 00:05:42.156 ] 00:05:42.156 11:38:07 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.156 11:38:07 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.156 11:38:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.416 11:38:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.416 11:38:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58058 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58058 ']' 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58058 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58058 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:42.416 killing process with pid 58058 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58058' 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58058 00:05:42.416 11:38:07 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58058 00:05:44.954 00:05:44.954 real 0m4.281s 00:05:44.954 user 0m7.687s 00:05:44.954 sys 0m0.626s 00:05:44.954 11:38:10 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.954 11:38:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.954 ************************************ 00:05:44.954 END TEST spdkcli_tcp 00:05:44.954 ************************************ 00:05:44.954 11:38:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.954 11:38:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.954 11:38:10 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.954 11:38:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.954 ************************************ 00:05:44.954 START TEST dpdk_mem_utility 00:05:44.954 ************************************ 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.954 * Looking for test storage... 00:05:44.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:44.954 
11:38:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.954 11:38:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.954 --rc genhtml_branch_coverage=1 00:05:44.954 --rc genhtml_function_coverage=1 00:05:44.954 --rc genhtml_legend=1 00:05:44.954 --rc geninfo_all_blocks=1 00:05:44.954 --rc geninfo_unexecuted_blocks=1 00:05:44.954 00:05:44.954 ' 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.954 --rc 
genhtml_branch_coverage=1 00:05:44.954 --rc genhtml_function_coverage=1 00:05:44.954 --rc genhtml_legend=1 00:05:44.954 --rc geninfo_all_blocks=1 00:05:44.954 --rc geninfo_unexecuted_blocks=1 00:05:44.954 00:05:44.954 ' 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.954 --rc genhtml_branch_coverage=1 00:05:44.954 --rc genhtml_function_coverage=1 00:05:44.954 --rc genhtml_legend=1 00:05:44.954 --rc geninfo_all_blocks=1 00:05:44.954 --rc geninfo_unexecuted_blocks=1 00:05:44.954 00:05:44.954 ' 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.954 --rc genhtml_branch_coverage=1 00:05:44.954 --rc genhtml_function_coverage=1 00:05:44.954 --rc genhtml_legend=1 00:05:44.954 --rc geninfo_all_blocks=1 00:05:44.954 --rc geninfo_unexecuted_blocks=1 00:05:44.954 00:05:44.954 ' 00:05:44.954 11:38:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.954 11:38:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58181 00:05:44.954 11:38:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.954 11:38:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58181 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58181 ']' 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.954 11:38:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.214 [2024-11-04 11:38:10.516371] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:05:45.214 [2024-11-04 11:38:10.516505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58181 ] 00:05:45.214 [2024-11-04 11:38:10.690794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.473 [2024-11-04 11:38:10.804215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.412 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.412 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:46.413 11:38:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.413 11:38:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.413 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.413 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.413 { 00:05:46.413 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.413 } 00:05:46.413 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.413 11:38:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.413 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:46.413 1 heaps 
totaling size 816.000000 MiB 00:05:46.413 size: 816.000000 MiB heap id: 0 00:05:46.413 end heaps---------- 00:05:46.413 9 mempools totaling size 595.772034 MiB 00:05:46.413 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.413 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.413 size: 92.545471 MiB name: bdev_io_58181 00:05:46.413 size: 50.003479 MiB name: msgpool_58181 00:05:46.413 size: 36.509338 MiB name: fsdev_io_58181 00:05:46.413 size: 21.763794 MiB name: PDU_Pool 00:05:46.413 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.413 size: 4.133484 MiB name: evtpool_58181 00:05:46.413 size: 0.026123 MiB name: Session_Pool 00:05:46.413 end mempools------- 00:05:46.413 6 memzones totaling size 4.142822 MiB 00:05:46.413 size: 1.000366 MiB name: RG_ring_0_58181 00:05:46.413 size: 1.000366 MiB name: RG_ring_1_58181 00:05:46.413 size: 1.000366 MiB name: RG_ring_4_58181 00:05:46.413 size: 1.000366 MiB name: RG_ring_5_58181 00:05:46.413 size: 0.125366 MiB name: RG_ring_2_58181 00:05:46.413 size: 0.015991 MiB name: RG_ring_3_58181 00:05:46.413 end memzones------- 00:05:46.413 11:38:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.413 heap id: 0 total size: 816.000000 MiB number of busy elements: 322 number of free elements: 18 00:05:46.413 list of free elements. 
size: 16.789673 MiB 00:05:46.413 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:46.413 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:46.413 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:46.413 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:46.413 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:46.413 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:46.413 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:46.413 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:46.413 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:46.413 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:46.413 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:46.413 element at address: 0x20001ac00000 with size: 0.559998 MiB 00:05:46.413 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:46.413 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:46.413 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:46.413 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:46.413 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:46.413 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:46.413 list of standard malloc elements. 
size: 199.289429 MiB 00:05:46.413 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:46.413 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:46.413 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:46.413 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:46.413 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:46.413 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:46.413 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:46.413 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:46.413 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:46.413 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:46.413 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:46.413 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:46.413 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:46.413 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:46.413 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:46.413 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:46.414 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:46.414 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:46.414 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:46.414 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac90ac0 with size: 0.000244 
MiB 00:05:46.414 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac926c0 
with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:46.414 element at 
address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:46.414 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b580 with size: 0.000244 MiB 
00:05:46.414 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:46.414 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d180 with 
size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:46.415 element at address: 
0x20002806ed80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:46.415 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:46.415 list of memzone associated elements. 
size: 599.920898 MiB 00:05:46.415 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:46.415 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.415 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:46.415 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.415 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:46.415 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58181_0 00:05:46.415 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:46.415 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58181_0 00:05:46.415 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:46.415 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58181_0 00:05:46.415 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:46.415 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.415 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:46.415 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.415 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:46.415 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58181_0 00:05:46.415 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:46.415 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58181 00:05:46.415 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:46.415 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58181 00:05:46.415 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:46.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.415 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:46.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.415 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:46.415 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.415 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:46.415 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.415 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:46.415 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58181 00:05:46.415 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:46.415 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58181 00:05:46.415 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:46.415 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58181 00:05:46.415 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:46.415 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58181 00:05:46.415 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:46.415 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58181 00:05:46.415 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:46.415 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58181 00:05:46.415 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:46.415 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.415 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:46.415 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.415 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:46.415 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.415 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:46.415 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58181 00:05:46.415 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:46.415 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58181 00:05:46.415 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:46.415 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:46.415 element at address: 0x200028064140 with size: 0.023804 MiB
00:05:46.415 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:46.415 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:46.415 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58181
00:05:46.415 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:05:46.415 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:46.415 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:46.415 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58181
00:05:46.415 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:46.415 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58181
00:05:46.415 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:46.415 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58181
00:05:46.415 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:05:46.415 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:46.415 11:38:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:46.415 11:38:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58181
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58181 ']'
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58181
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58181
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:46.415 killing process with pid 58181 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58181'
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58181
00:05:46.415 11:38:11 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58181
00:05:49.064
00:05:49.064 real 0m3.997s
00:05:49.064 user 0m3.902s
00:05:49.064 sys 0m0.578s
00:05:49.064 11:38:14 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:49.064 ************************************
00:05:49.064 END TEST dpdk_mem_utility
00:05:49.064 ************************************
00:05:49.064 11:38:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:49.064 11:38:14 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:49.064 11:38:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:49.064 11:38:14 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:49.064 11:38:14 -- common/autotest_common.sh@10 -- # set +x
00:05:49.064 ************************************
00:05:49.064 START TEST event
00:05:49.064 ************************************
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:49.064 * Looking for test storage...
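The `killprocess 58181` trace above follows a common shell teardown pattern: probe the pid with `kill -0`, inspect the process name, send SIGTERM, then reap the child with `wait`. A minimal sketch of that pattern (simplified from the `autotest_common.sh` helper traced here, not the exact implementation; the `sleep` stand-in is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern traced above (simplified; illustrative
# only): verify the pid is alive, terminate it, and reap it with `wait`.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1               # no pid supplied
  kill -0 "$pid" 2>/dev/null || return 1  # process already gone
  echo "killing process with pid $pid"
  kill "$pid"                             # default SIGTERM
  wait "$pid" 2>/dev/null || true         # `wait` only reaps our own children
}

sleep 60 &      # stand-in for the app under test
killprocess $!
```

The traced helper additionally checks `uname` and `ps --no-headers -o comm=` so it can special-case processes whose command name is `sudo`.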
00:05:49.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1691 -- # lcov --version
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:49.064 11:38:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:49.064 11:38:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:49.064 11:38:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:49.064 11:38:14 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:49.064 11:38:14 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:49.064 11:38:14 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:49.064 11:38:14 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:49.064 11:38:14 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:49.064 11:38:14 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:49.064 11:38:14 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:49.064 11:38:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:49.064 11:38:14 event -- scripts/common.sh@344 -- # case "$op" in
00:05:49.064 11:38:14 event -- scripts/common.sh@345 -- # : 1
00:05:49.064 11:38:14 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:49.064 11:38:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:49.064 11:38:14 event -- scripts/common.sh@365 -- # decimal 1
00:05:49.064 11:38:14 event -- scripts/common.sh@353 -- # local d=1
00:05:49.064 11:38:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:49.064 11:38:14 event -- scripts/common.sh@355 -- # echo 1
00:05:49.064 11:38:14 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:49.064 11:38:14 event -- scripts/common.sh@366 -- # decimal 2
00:05:49.064 11:38:14 event -- scripts/common.sh@353 -- # local d=2
00:05:49.064 11:38:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:49.064 11:38:14 event -- scripts/common.sh@355 -- # echo 2
00:05:49.064 11:38:14 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:49.064 11:38:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:49.064 11:38:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:49.064 11:38:14 event -- scripts/common.sh@368 -- # return 0
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.064 --rc genhtml_branch_coverage=1
00:05:49.064 --rc genhtml_function_coverage=1
00:05:49.064 --rc genhtml_legend=1
00:05:49.064 --rc geninfo_all_blocks=1
00:05:49.064 --rc geninfo_unexecuted_blocks=1
00:05:49.064
00:05:49.064 '
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.064 --rc genhtml_branch_coverage=1
00:05:49.064 --rc genhtml_function_coverage=1
00:05:49.064 --rc genhtml_legend=1
00:05:49.064 --rc geninfo_all_blocks=1
00:05:49.064 --rc geninfo_unexecuted_blocks=1
00:05:49.064
00:05:49.064 '
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.064 --rc genhtml_branch_coverage=1
00:05:49.064 --rc genhtml_function_coverage=1
00:05:49.064 --rc genhtml_legend=1
00:05:49.064 --rc geninfo_all_blocks=1
00:05:49.064 --rc geninfo_unexecuted_blocks=1
00:05:49.064
00:05:49.064 '
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.064 --rc genhtml_branch_coverage=1
00:05:49.064 --rc genhtml_function_coverage=1
00:05:49.064 --rc genhtml_legend=1
00:05:49.064 --rc geninfo_all_blocks=1
00:05:49.064 --rc geninfo_unexecuted_blocks=1
00:05:49.064
00:05:49.064 '
00:05:49.064 11:38:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:49.064 11:38:14 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:49.064 11:38:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:05:49.064 11:38:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:49.064 11:38:14 event -- common/autotest_common.sh@10 -- # set +x
00:05:49.064 ************************************
00:05:49.064 START TEST event_perf
00:05:49.064 ************************************
00:05:49.064 11:38:14 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:49.064 Running I/O for 1 seconds...[2024-11-04 11:38:14.535119] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization...
00:05:49.064 [2024-11-04 11:38:14.535233] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58289 ]
00:05:49.324 [2024-11-04 11:38:14.696426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:49.324 Running I/O for 1 seconds...[2024-11-04 11:38:14.817082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:49.324 [2024-11-04 11:38:14.817211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:49.324 [2024-11-04 11:38:14.817377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:49.324 [2024-11-04 11:38:14.817445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:50.702
00:05:50.702 lcore 0: 108244
00:05:50.702 lcore 1: 108243
00:05:50.702 lcore 2: 108246
00:05:50.702 lcore 3: 108242
00:05:50.702 done.
00:05:50.702 00:05:50.702 real 0m1.574s 00:05:50.702 user 0m4.331s 00:05:50.702 sys 0m0.121s 00:05:50.702 11:38:16 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.702 11:38:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.702 ************************************ 00:05:50.702 END TEST event_perf 00:05:50.702 ************************************ 00:05:50.702 11:38:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.702 11:38:16 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:50.702 11:38:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.702 11:38:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.702 ************************************ 00:05:50.702 START TEST event_reactor 00:05:50.702 ************************************ 00:05:50.702 11:38:16 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.702 [2024-11-04 11:38:16.171306] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:05:50.702 [2024-11-04 11:38:16.171751] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58327 ] 00:05:50.962 [2024-11-04 11:38:16.341948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.962 [2024-11-04 11:38:16.455256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.343 test_start 00:05:52.343 oneshot 00:05:52.343 tick 100 00:05:52.343 tick 100 00:05:52.343 tick 250 00:05:52.343 tick 100 00:05:52.343 tick 100 00:05:52.343 tick 250 00:05:52.343 tick 100 00:05:52.343 tick 500 00:05:52.343 tick 100 00:05:52.343 tick 100 00:05:52.343 tick 250 00:05:52.343 tick 100 00:05:52.343 tick 100 00:05:52.343 test_end 00:05:52.343 00:05:52.343 real 0m1.563s 00:05:52.343 user 0m1.360s 00:05:52.343 sys 0m0.093s 00:05:52.343 11:38:17 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.343 11:38:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.343 ************************************ 00:05:52.343 END TEST event_reactor 00:05:52.343 ************************************ 00:05:52.343 11:38:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.343 11:38:17 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:52.343 11:38:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.343 11:38:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.343 ************************************ 00:05:52.343 START TEST event_reactor_perf 00:05:52.343 ************************************ 00:05:52.343 11:38:17 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.343 [2024-11-04 
11:38:17.803463] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:05:52.343 [2024-11-04 11:38:17.803593] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58365 ] 00:05:52.602 [2024-11-04 11:38:17.980763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.602 [2024-11-04 11:38:18.103059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.011 test_start 00:05:54.011 test_end 00:05:54.011 Performance: 370337 events per second 00:05:54.011 00:05:54.011 real 0m1.585s 00:05:54.011 user 0m1.377s 00:05:54.011 sys 0m0.098s 00:05:54.011 11:38:19 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.011 11:38:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.011 ************************************ 00:05:54.011 END TEST event_reactor_perf 00:05:54.011 ************************************ 00:05:54.011 11:38:19 event -- event/event.sh@49 -- # uname -s 00:05:54.011 11:38:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.011 11:38:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.011 11:38:19 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.011 11:38:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.011 11:38:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.011 ************************************ 00:05:54.011 START TEST event_scheduler 00:05:54.011 ************************************ 00:05:54.011 11:38:19 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:54.011 * Looking for test storage... 
00:05:54.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.284 11:38:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.284 --rc genhtml_branch_coverage=1 00:05:54.284 --rc genhtml_function_coverage=1 00:05:54.284 --rc genhtml_legend=1 00:05:54.284 --rc geninfo_all_blocks=1 00:05:54.284 --rc geninfo_unexecuted_blocks=1 00:05:54.284 00:05:54.284 ' 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.284 --rc genhtml_branch_coverage=1 00:05:54.284 --rc genhtml_function_coverage=1 00:05:54.284 --rc 
genhtml_legend=1 00:05:54.284 --rc geninfo_all_blocks=1 00:05:54.284 --rc geninfo_unexecuted_blocks=1 00:05:54.284 00:05:54.284 ' 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.284 --rc genhtml_branch_coverage=1 00:05:54.284 --rc genhtml_function_coverage=1 00:05:54.284 --rc genhtml_legend=1 00:05:54.284 --rc geninfo_all_blocks=1 00:05:54.284 --rc geninfo_unexecuted_blocks=1 00:05:54.284 00:05:54.284 ' 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.284 --rc genhtml_branch_coverage=1 00:05:54.284 --rc genhtml_function_coverage=1 00:05:54.284 --rc genhtml_legend=1 00:05:54.284 --rc geninfo_all_blocks=1 00:05:54.284 --rc geninfo_unexecuted_blocks=1 00:05:54.284 00:05:54.284 ' 00:05:54.284 11:38:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.284 11:38:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58441 00:05:54.284 11:38:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.284 11:38:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.284 11:38:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58441 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58441 ']' 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.284 11:38:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.284 [2024-11-04 11:38:19.734175] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:05:54.284 [2024-11-04 11:38:19.734314] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58441 ] 00:05:54.543 [2024-11-04 11:38:19.903103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.543 [2024-11-04 11:38:20.045058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.543 [2024-11-04 11:38:20.045256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.543 [2024-11-04 11:38:20.045387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.543 [2024-11-04 11:38:20.045459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.110 11:38:20 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.110 11:38:20 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:55.110 11:38:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.110 11:38:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.110 11:38:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.110 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.110 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.110 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.110 POWER: Cannot set governor of lcore 0 to performance 00:05:55.110 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.110 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.110 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:55.110 POWER: Cannot set governor of lcore 0 to userspace 00:05:55.110 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:55.110 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:55.110 POWER: Unable to set Power Management Environment for lcore 0 00:05:55.110 [2024-11-04 11:38:20.590253] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:55.110 [2024-11-04 11:38:20.590277] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:55.110 [2024-11-04 11:38:20.590288] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:55.110 [2024-11-04 11:38:20.590311] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.110 [2024-11-04 11:38:20.590320] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.110 [2024-11-04 11:38:20.590330] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.110 11:38:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.110 11:38:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.110 11:38:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.110 11:38:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.679 [2024-11-04 11:38:20.923353] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:55.680 11:38:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:55.680 11:38:20 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.680 11:38:20 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.680 11:38:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 ************************************ 00:05:55.680 START TEST scheduler_create_thread 00:05:55.680 ************************************ 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 2 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 3 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 4 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 5 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 6 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:55.680 7 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 8 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 9 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.680 10 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.680 11:38:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.082 11:38:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.082 11:38:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.082 11:38:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.082 11:38:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.082 11:38:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.020 11:38:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.020 11:38:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.020 11:38:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.020 11:38:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.590 11:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.590 11:38:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.590 11:38:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.590 11:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.590 11:38:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.526 11:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.526 00:05:59.526 real 0m3.885s 00:05:59.526 user 0m0.028s 00:05:59.526 sys 0m0.009s 00:05:59.526 11:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.526 11:38:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.526 ************************************ 00:05:59.526 END TEST scheduler_create_thread 00:05:59.526 ************************************ 00:05:59.526 11:38:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.526 11:38:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58441 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58441 ']' 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58441 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58441 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:59.526 killing process with pid 58441 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58441' 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58441 00:05:59.526 11:38:24 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58441 00:05:59.785 [2024-11-04 11:38:25.201458] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:01.164 00:06:01.164 real 0m6.958s 00:06:01.164 user 0m14.333s 00:06:01.164 sys 0m0.557s 00:06:01.164 11:38:26 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.164 11:38:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.164 ************************************ 00:06:01.164 END TEST event_scheduler 00:06:01.164 ************************************ 00:06:01.164 11:38:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:01.164 11:38:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:01.164 11:38:26 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.164 11:38:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.164 11:38:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.164 ************************************ 00:06:01.164 START TEST app_repeat 00:06:01.164 ************************************ 00:06:01.164 11:38:26 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58558 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:01.164 
11:38:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.164 Process app_repeat pid: 58558 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58558' 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.164 spdk_app_start Round 0 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:01.164 11:38:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58558 /var/tmp/spdk-nbd.sock 00:06:01.164 11:38:26 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58558 ']' 00:06:01.164 11:38:26 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.164 11:38:26 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.164 11:38:26 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.164 11:38:26 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.164 11:38:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.164 [2024-11-04 11:38:26.504919] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:06:01.164 [2024-11-04 11:38:26.505033] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58558 ] 00:06:01.164 [2024-11-04 11:38:26.681184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.423 [2024-11-04 11:38:26.800964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.423 [2024-11-04 11:38:26.800999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.991 11:38:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.991 11:38:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:01.991 11:38:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.249 Malloc0 00:06:02.249 11:38:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.507 Malloc1 00:06:02.507 11:38:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.507 11:38:27 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.507 11:38:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.766 /dev/nbd0 00:06:02.766 11:38:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.766 11:38:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.766 1+0 records in 00:06:02.766 1+0 
records out 00:06:02.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392908 s, 10.4 MB/s 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:02.766 11:38:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:02.766 11:38:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.766 11:38:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.766 11:38:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.025 /dev/nbd1 00:06:03.025 11:38:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.025 11:38:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.025 1+0 records in 00:06:03.025 1+0 records out 00:06:03.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476847 s, 8.6 MB/s 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.025 11:38:28 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.025 11:38:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.025 11:38:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.025 11:38:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.025 11:38:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.025 11:38:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.283 { 00:06:03.283 "nbd_device": "/dev/nbd0", 00:06:03.283 "bdev_name": "Malloc0" 00:06:03.283 }, 00:06:03.283 { 00:06:03.283 "nbd_device": "/dev/nbd1", 00:06:03.283 "bdev_name": "Malloc1" 00:06:03.283 } 00:06:03.283 ]' 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.283 { 00:06:03.283 "nbd_device": "/dev/nbd0", 00:06:03.283 "bdev_name": "Malloc0" 00:06:03.283 }, 00:06:03.283 { 00:06:03.283 "nbd_device": "/dev/nbd1", 00:06:03.283 "bdev_name": "Malloc1" 00:06:03.283 } 00:06:03.283 ]' 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.283 /dev/nbd1' 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.283 /dev/nbd1' 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.283 256+0 records in 00:06:03.283 256+0 records out 00:06:03.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146432 s, 71.6 MB/s 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.283 256+0 records in 00:06:03.283 256+0 records out 00:06:03.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184247 s, 56.9 MB/s 00:06:03.283 11:38:28 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.283 11:38:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.542 256+0 records in 00:06:03.542 256+0 records out 00:06:03.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289462 s, 36.2 MB/s 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.542 11:38:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.801 11:38:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.060 11:38:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.319 11:38:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.319 11:38:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.577 11:38:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.959 [2024-11-04 11:38:31.197574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.959 [2024-11-04 11:38:31.310486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.959 [2024-11-04 11:38:31.310492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.219 
[2024-11-04 11:38:31.500648] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.219 [2024-11-04 11:38:31.500732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.598 11:38:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.598 spdk_app_start Round 1 00:06:07.598 11:38:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.598 11:38:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58558 /var/tmp/spdk-nbd.sock 00:06:07.598 11:38:33 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58558 ']' 00:06:07.598 11:38:33 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.598 11:38:33 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.598 11:38:33 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:07.598 11:38:33 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.598 11:38:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.857 11:38:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.857 11:38:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:07.857 11:38:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.117 Malloc0 00:06:08.117 11:38:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.694 Malloc1 00:06:08.694 11:38:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.694 11:38:33 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.694 11:38:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.694 /dev/nbd0 00:06:08.694 11:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.694 11:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:08.694 11:38:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.967 1+0 records in 00:06:08.967 1+0 records out 00:06:08.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549092 s, 7.5 MB/s 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.967 11:38:34 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:08.967 11:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.967 11:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.967 11:38:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.967 /dev/nbd1 00:06:08.967 11:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.967 11:38:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.967 1+0 records in 00:06:08.967 1+0 records out 00:06:08.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262948 s, 15.6 MB/s 00:06:08.967 11:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.227 11:38:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:09.227 11:38:34 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.227 11:38:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.227 11:38:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:09.227 11:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.227 11:38:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.227 11:38:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.227 11:38:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.227 11:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.227 11:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.227 { 00:06:09.227 "nbd_device": "/dev/nbd0", 00:06:09.227 "bdev_name": "Malloc0" 00:06:09.227 }, 00:06:09.227 { 00:06:09.227 "nbd_device": "/dev/nbd1", 00:06:09.227 "bdev_name": "Malloc1" 00:06:09.227 } 00:06:09.227 ]' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.486 { 00:06:09.486 "nbd_device": "/dev/nbd0", 00:06:09.486 "bdev_name": "Malloc0" 00:06:09.486 }, 00:06:09.486 { 00:06:09.486 "nbd_device": "/dev/nbd1", 00:06:09.486 "bdev_name": "Malloc1" 00:06:09.486 } 00:06:09.486 ]' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.486 /dev/nbd1' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.486 /dev/nbd1' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.486 
11:38:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.486 256+0 records in 00:06:09.486 256+0 records out 00:06:09.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616134 s, 170 MB/s 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.486 256+0 records in 00:06:09.486 256+0 records out 00:06:09.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186414 s, 56.2 MB/s 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.486 256+0 records in 00:06:09.486 256+0 records out 00:06:09.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288278 s, 36.4 MB/s 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.486 11:38:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.745 11:38:35 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.745 11:38:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.005 11:38:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.263 11:38:35 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.263 11:38:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.263 11:38:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.522 11:38:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.902 [2024-11-04 11:38:37.169369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.902 [2024-11-04 11:38:37.287777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.902 [2024-11-04 11:38:37.287816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.172 [2024-11-04 11:38:37.480798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.172 [2024-11-04 11:38:37.481015] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:13.570 11:38:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.570 11:38:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:13.570 spdk_app_start Round 2 00:06:13.570 11:38:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58558 /var/tmp/spdk-nbd.sock 00:06:13.570 11:38:39 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58558 ']' 00:06:13.570 11:38:39 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.571 11:38:39 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.571 11:38:39 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.571 11:38:39 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.571 11:38:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.829 11:38:39 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:13.829 11:38:39 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:13.829 11:38:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.089 Malloc0 00:06:14.089 11:38:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.349 Malloc1 00:06:14.349 11:38:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.349 
11:38:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.349 11:38:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.607 /dev/nbd0 00:06:14.607 11:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.607 11:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:14.607 11:38:40 
event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.607 1+0 records in 00:06:14.607 1+0 records out 00:06:14.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589597 s, 6.9 MB/s 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.607 11:38:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:14.607 11:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.607 11:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.607 11:38:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.866 /dev/nbd1 00:06:14.866 11:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.866 11:38:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:14.866 11:38:40 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.866 1+0 records in 00:06:14.866 1+0 records out 00:06:14.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320412 s, 12.8 MB/s 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:14.866 11:38:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:14.866 11:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.866 11:38:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.866 11:38:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.866 11:38:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.866 11:38:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.126 { 00:06:15.126 "nbd_device": "/dev/nbd0", 00:06:15.126 "bdev_name": "Malloc0" 00:06:15.126 }, 00:06:15.126 { 00:06:15.126 "nbd_device": "/dev/nbd1", 00:06:15.126 "bdev_name": 
"Malloc1" 00:06:15.126 } 00:06:15.126 ]' 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.126 { 00:06:15.126 "nbd_device": "/dev/nbd0", 00:06:15.126 "bdev_name": "Malloc0" 00:06:15.126 }, 00:06:15.126 { 00:06:15.126 "nbd_device": "/dev/nbd1", 00:06:15.126 "bdev_name": "Malloc1" 00:06:15.126 } 00:06:15.126 ]' 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.126 /dev/nbd1' 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.126 /dev/nbd1' 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.126 256+0 records in 00:06:15.126 256+0 records out 00:06:15.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120015 s, 87.4 MB/s 
00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.126 256+0 records in 00:06:15.126 256+0 records out 00:06:15.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255902 s, 41.0 MB/s 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.126 11:38:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.386 256+0 records in 00:06:15.386 256+0 records out 00:06:15.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312943 s, 33.5 MB/s 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.386 11:38:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.660 11:38:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.660 11:38:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.660 11:38:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.660 11:38:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.660 11:38:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.660 11:38:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.931 11:38:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.931 11:38:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.500 11:38:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.879 [2024-11-04 11:38:42.996371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.880 [2024-11-04 11:38:43.108285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.880 [2024-11-04 11:38:43.108286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.880 [2024-11-04 11:38:43.304875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.880 [2024-11-04 11:38:43.304971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.836 11:38:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58558 /var/tmp/spdk-nbd.sock 00:06:19.836 11:38:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58558 ']' 00:06:19.836 11:38:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.836 11:38:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.836 11:38:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:19.836 11:38:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.836 11:38:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:19.836 11:38:45 event.app_repeat -- event/event.sh@39 -- # killprocess 58558 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58558 ']' 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58558 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58558 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58558' 00:06:19.836 killing process with pid 58558 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58558 00:06:19.836 11:38:45 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58558 00:06:20.774 spdk_app_start is called in Round 0. 00:06:20.774 Shutdown signal received, stop current app iteration 00:06:20.774 Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 reinitialization... 00:06:20.774 spdk_app_start is called in Round 1. 00:06:20.774 Shutdown signal received, stop current app iteration 00:06:20.774 Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 reinitialization... 00:06:20.774 spdk_app_start is called in Round 2. 
00:06:20.774 Shutdown signal received, stop current app iteration 00:06:20.774 Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 reinitialization... 00:06:20.774 spdk_app_start is called in Round 3. 00:06:20.774 Shutdown signal received, stop current app iteration 00:06:20.774 11:38:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:20.774 11:38:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:20.774 00:06:20.774 real 0m19.735s 00:06:20.774 user 0m42.512s 00:06:20.774 sys 0m2.790s 00:06:20.774 11:38:46 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.774 11:38:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.774 ************************************ 00:06:20.774 END TEST app_repeat 00:06:20.774 ************************************ 00:06:20.774 11:38:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:20.774 11:38:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:20.774 11:38:46 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.774 11:38:46 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.774 11:38:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.774 ************************************ 00:06:20.774 START TEST cpu_locks 00:06:20.774 ************************************ 00:06:20.774 11:38:46 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.033 * Looking for test storage... 
00:06:21.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.033 11:38:46 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:21.033 11:38:46 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:21.033 11:38:46 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:21.033 11:38:46 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:21.033 11:38:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.034 11:38:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:21.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.034 --rc genhtml_branch_coverage=1 00:06:21.034 --rc genhtml_function_coverage=1 00:06:21.034 --rc genhtml_legend=1 00:06:21.034 --rc geninfo_all_blocks=1 00:06:21.034 --rc geninfo_unexecuted_blocks=1 00:06:21.034 00:06:21.034 ' 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:21.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.034 --rc genhtml_branch_coverage=1 00:06:21.034 --rc genhtml_function_coverage=1 00:06:21.034 --rc genhtml_legend=1 00:06:21.034 --rc geninfo_all_blocks=1 00:06:21.034 --rc geninfo_unexecuted_blocks=1 
00:06:21.034 00:06:21.034 ' 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:21.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.034 --rc genhtml_branch_coverage=1 00:06:21.034 --rc genhtml_function_coverage=1 00:06:21.034 --rc genhtml_legend=1 00:06:21.034 --rc geninfo_all_blocks=1 00:06:21.034 --rc geninfo_unexecuted_blocks=1 00:06:21.034 00:06:21.034 ' 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:21.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.034 --rc genhtml_branch_coverage=1 00:06:21.034 --rc genhtml_function_coverage=1 00:06:21.034 --rc genhtml_legend=1 00:06:21.034 --rc geninfo_all_blocks=1 00:06:21.034 --rc geninfo_unexecuted_blocks=1 00:06:21.034 00:06:21.034 ' 00:06:21.034 11:38:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.034 11:38:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.034 11:38:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.034 11:38:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.034 11:38:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.034 ************************************ 00:06:21.034 START TEST default_locks 00:06:21.034 ************************************ 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59013 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.034 
11:38:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59013 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59013 ']' 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.034 11:38:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.293 [2024-11-04 11:38:46.568658] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:06:21.293 [2024-11-04 11:38:46.568866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59013 ] 00:06:21.293 [2024-11-04 11:38:46.728672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.553 [2024-11-04 11:38:46.847964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59013 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59013 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59013 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59013 ']' 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59013 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59013 00:06:22.492 killing process with pid 59013 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59013' 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59013 00:06:22.492 11:38:47 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59013 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59013 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59013 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59013 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59013 ']' 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.030 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59013) - No such process 00:06:25.030 ERROR: process (pid: 59013) is no longer running 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.030 00:06:25.030 real 0m3.950s 00:06:25.030 user 0m3.904s 00:06:25.030 sys 0m0.599s 00:06:25.030 ************************************ 00:06:25.030 END TEST default_locks 00:06:25.030 ************************************ 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.030 11:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.030 11:38:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.030 11:38:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:06:25.030 11:38:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.030 11:38:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.030 ************************************ 00:06:25.030 START TEST default_locks_via_rpc 00:06:25.030 ************************************ 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59088 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59088 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59088 ']' 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.030 11:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.289 [2024-11-04 11:38:50.580467] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:06:25.289 [2024-11-04 11:38:50.580667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59088 ] 00:06:25.289 [2024-11-04 11:38:50.752042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.547 [2024-11-04 11:38:50.870733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.529 11:38:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59088 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.529 11:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59088 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59088 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59088 ']' 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59088 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59088 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:26.788 killing process with pid 59088 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59088' 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59088 00:06:26.788 11:38:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59088 00:06:29.322 00:06:29.322 real 0m4.054s 00:06:29.322 user 0m3.986s 00:06:29.323 sys 0m0.614s 00:06:29.323 11:38:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.323 11:38:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.323 ************************************ 00:06:29.323 END TEST default_locks_via_rpc 00:06:29.323 ************************************ 00:06:29.323 11:38:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:29.323 11:38:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.323 11:38:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.323 11:38:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.323 ************************************ 00:06:29.323 START TEST non_locking_app_on_locked_coremask 00:06:29.323 ************************************ 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59162 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59162 /var/tmp/spdk.sock 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59162 ']' 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:29.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.323 11:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.323 [2024-11-04 11:38:54.701860] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:06:29.323 [2024-11-04 11:38:54.702076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59162 ] 00:06:29.582 [2024-11-04 11:38:54.862295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.582 [2024-11-04 11:38:54.980008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59180 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59180 /var/tmp/spdk2.sock 00:06:30.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59180 ']' 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.518 11:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.518 [2024-11-04 11:38:55.982647] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:06:30.518 [2024-11-04 11:38:55.982784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59180 ] 00:06:30.775 [2024-11-04 11:38:56.158648] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.775 [2024-11-04 11:38:56.158737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.034 [2024-11-04 11:38:56.392970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59162 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59162 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59162 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59162 ']' 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59162 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59162 00:06:33.569 killing process with pid 59162 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59162' 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59162 00:06:33.569 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59162 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59180 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59180 ']' 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59180 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59180 00:06:38.848 killing process with pid 59180 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59180' 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59180 00:06:38.848 11:39:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59180 00:06:41.431 ************************************ 00:06:41.431 END TEST non_locking_app_on_locked_coremask 00:06:41.431 ************************************ 00:06:41.431 00:06:41.431 real 0m12.337s 
00:06:41.431 user 0m12.595s 00:06:41.431 sys 0m1.164s 00:06:41.431 11:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.431 11:39:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 11:39:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.690 11:39:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.690 11:39:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.690 11:39:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 ************************************ 00:06:41.690 START TEST locking_app_on_unlocked_coremask 00:06:41.690 ************************************ 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59340 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59340 /var/tmp/spdk.sock 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59340 ']' 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.690 11:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.690 [2024-11-04 11:39:07.100802] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:06:41.690 [2024-11-04 11:39:07.101021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:06:41.949 [2024-11-04 11:39:07.281289] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.949 [2024-11-04 11:39:07.281487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.949 [2024-11-04 11:39:07.403383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59356 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59356 /var/tmp/spdk2.sock 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59356 ']' 00:06:42.886 11:39:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.886 11:39:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.145 [2024-11-04 11:39:08.482993] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:06:43.145 [2024-11-04 11:39:08.483301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59356 ] 00:06:43.403 [2024-11-04 11:39:08.688905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.661 [2024-11-04 11:39:08.975858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59356 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59356 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59340 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59340 ']' 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59340 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59340 00:06:46.223 killing process with pid 59340 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59340' 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59340 00:06:46.223 11:39:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59340 00:06:51.498 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59356 00:06:51.498 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59356 ']' 00:06:51.498 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59356 00:06:51.498 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:51.498 
11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:51.498 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59356 00:06:51.498 killing process with pid 59356 00:06:51.498 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:51.499 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:51.499 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59356' 00:06:51.499 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59356 00:06:51.499 11:39:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59356 00:06:54.034 00:06:54.034 real 0m12.044s 00:06:54.034 user 0m12.449s 00:06:54.034 sys 0m1.347s 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.034 ************************************ 00:06:54.034 END TEST locking_app_on_unlocked_coremask 00:06:54.034 ************************************ 00:06:54.034 11:39:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:54.034 11:39:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.034 11:39:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.034 11:39:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.034 ************************************ 00:06:54.034 START TEST locking_app_on_locked_coremask 00:06:54.034 
************************************ 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59512 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59512 /var/tmp/spdk.sock 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59512 ']' 00:06:54.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.034 11:39:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.034 [2024-11-04 11:39:19.214358] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:06:54.034 [2024-11-04 11:39:19.214542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59512 ] 00:06:54.034 [2024-11-04 11:39:19.398069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.034 [2024-11-04 11:39:19.519873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59528 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59528 /var/tmp/spdk2.sock 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59528 /var/tmp/spdk2.sock 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59528 /var/tmp/spdk2.sock 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59528 ']' 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.971 11:39:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.232 [2024-11-04 11:39:20.521823] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:06:55.232 [2024-11-04 11:39:20.522025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59528 ] 00:06:55.232 [2024-11-04 11:39:20.694762] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59512 has claimed it. 00:06:55.232 [2024-11-04 11:39:20.694866] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:55.801 ERROR: process (pid: 59528) is no longer running 00:06:55.801 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59528) - No such process 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59512 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59512 00:06:55.801 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59512 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59512 ']' 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59512 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59512 00:06:56.061 
killing process with pid 59512 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59512' 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59512 00:06:56.061 11:39:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59512 00:06:59.352 ************************************ 00:06:59.352 END TEST locking_app_on_locked_coremask 00:06:59.352 ************************************ 00:06:59.352 00:06:59.352 real 0m5.053s 00:06:59.352 user 0m5.194s 00:06:59.352 sys 0m0.809s 00:06:59.352 11:39:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.352 11:39:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.352 11:39:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:59.352 11:39:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.352 11:39:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.352 11:39:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.352 ************************************ 00:06:59.352 START TEST locking_overlapped_coremask 00:06:59.352 ************************************ 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59598 00:06:59.352 11:39:24 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59598 /var/tmp/spdk.sock 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59598 ']' 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.352 11:39:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.352 [2024-11-04 11:39:24.318308] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:06:59.352 [2024-11-04 11:39:24.318437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ] 00:06:59.352 [2024-11-04 11:39:24.492851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.352 [2024-11-04 11:39:24.639699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.352 [2024-11-04 11:39:24.639800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.352 [2024-11-04 11:39:24.639832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59621 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59621 /var/tmp/spdk2.sock 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59621 /var/tmp/spdk2.sock 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59621 /var/tmp/spdk2.sock 00:07:00.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59621 ']' 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.290 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.291 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.291 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.291 11:39:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.291 [2024-11-04 11:39:25.732677] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:00.291 [2024-11-04 11:39:25.732847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59621 ] 00:07:00.549 [2024-11-04 11:39:25.938798] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59598 has claimed it. 00:07:00.549 [2024-11-04 11:39:25.938884] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:01.116 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59621) - No such process 00:07:01.116 ERROR: process (pid: 59621) is no longer running 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59598 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59598 ']' 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59598 00:07:01.116 11:39:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59598 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.116 killing process with pid 59598 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59598' 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59598 00:07:01.116 11:39:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59598 00:07:04.405 00:07:04.405 real 0m5.113s 00:07:04.405 user 0m14.073s 00:07:04.405 sys 0m0.631s 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.405 ************************************ 00:07:04.405 END TEST locking_overlapped_coremask 00:07:04.405 ************************************ 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.405 11:39:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:04.405 11:39:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.405 11:39:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.405 11:39:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.405 ************************************ 00:07:04.405 START TEST 
locking_overlapped_coremask_via_rpc 00:07:04.405 ************************************ 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59696 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59696 /var/tmp/spdk.sock 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59696 ']' 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:04.405 11:39:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.405 [2024-11-04 11:39:29.503935] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:07:04.405 [2024-11-04 11:39:29.504171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:07:04.406 [2024-11-04 11:39:29.695857] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:04.406 [2024-11-04 11:39:29.696012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.406 [2024-11-04 11:39:29.827465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.406 [2024-11-04 11:39:29.827501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.406 [2024-11-04 11:39:29.827543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59714 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59714 /var/tmp/spdk2.sock 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59714 ']' 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:05.343 11:39:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:05.343 11:39:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.658 [2024-11-04 11:39:30.929721] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:05.658 [2024-11-04 11:39:30.929986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:07:05.658 [2024-11-04 11:39:31.110352] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:05.658 [2024-11-04 11:39:31.110445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.917 [2024-11-04 11:39:31.377576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.917 [2024-11-04 11:39:31.381488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.917 [2024-11-04 11:39:31.381494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.444 11:39:33 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.444 [2024-11-04 11:39:33.681669] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59696 has claimed it. 00:07:08.444 request: 00:07:08.444 { 00:07:08.444 "method": "framework_enable_cpumask_locks", 00:07:08.444 "req_id": 1 00:07:08.444 } 00:07:08.444 Got JSON-RPC error response 00:07:08.444 response: 00:07:08.444 { 00:07:08.444 "code": -32603, 00:07:08.444 "message": "Failed to claim CPU core: 2" 00:07:08.444 } 00:07:08.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59696 /var/tmp/spdk.sock 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59696 ']' 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.444 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59714 /var/tmp/spdk2.sock 00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59714 ']' 00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.702 11:39:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.960 ************************************ 00:07:08.960 END TEST locking_overlapped_coremask_via_rpc 00:07:08.960 ************************************ 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.960 00:07:08.960 real 0m4.848s 00:07:08.960 user 0m1.702s 00:07:08.960 sys 0m0.206s 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.960 11:39:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.960 11:39:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:08.960 11:39:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59696 ]] 00:07:08.960 11:39:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59696 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59696 ']' 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59696 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59696 00:07:08.960 killing process with pid 59696 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59696' 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59696 00:07:08.960 11:39:34 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59696 00:07:11.494 11:39:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59714 ]] 00:07:11.494 11:39:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59714 00:07:11.494 11:39:36 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59714 ']' 00:07:11.494 11:39:36 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59714 00:07:11.494 11:39:36 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:11.494 11:39:36 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:11.494 11:39:36 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59714 00:07:11.752 killing process with pid 59714 00:07:11.752 11:39:37 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:11.752 11:39:37 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:11.753 11:39:37 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59714' 00:07:11.753 11:39:37 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59714 00:07:11.753 11:39:37 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59714 00:07:14.283 11:39:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:14.283 11:39:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:14.283 11:39:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59696 ]] 00:07:14.283 11:39:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59696 00:07:14.283 Process with pid 59696 is not found 00:07:14.283 Process with pid 59714 is not found 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59696 ']' 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59696 00:07:14.283 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59696) - No such process 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59696 is not found' 00:07:14.283 11:39:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59714 ]] 00:07:14.283 11:39:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59714 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59714 ']' 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59714 00:07:14.283 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59714) - No such process 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59714 is not found' 00:07:14.283 11:39:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:14.283 ************************************ 00:07:14.283 END TEST cpu_locks 00:07:14.283 ************************************ 00:07:14.283 00:07:14.283 real 0m53.310s 00:07:14.283 user 1m33.092s 00:07:14.283 sys 0m6.642s 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:14.283 11:39:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.283 ************************************ 00:07:14.283 END TEST event 00:07:14.283 ************************************ 00:07:14.283 00:07:14.283 real 1m25.351s 00:07:14.283 user 2m37.256s 00:07:14.283 sys 0m10.690s 00:07:14.283 11:39:39 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.283 11:39:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:14.283 11:39:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:14.283 11:39:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:14.283 11:39:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.283 11:39:39 -- common/autotest_common.sh@10 -- # set +x 00:07:14.283 ************************************ 00:07:14.283 START TEST thread 00:07:14.283 ************************************ 00:07:14.283 11:39:39 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:14.283 * Looking for test storage... 
00:07:14.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:14.283 11:39:39 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:14.283 11:39:39 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:14.283 11:39:39 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:14.543 11:39:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.543 11:39:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.543 11:39:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.543 11:39:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.543 11:39:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.543 11:39:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.543 11:39:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.543 11:39:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.543 11:39:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.543 11:39:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.543 11:39:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.543 11:39:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:14.543 11:39:39 thread -- scripts/common.sh@345 -- # : 1 00:07:14.543 11:39:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.543 11:39:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.543 11:39:39 thread -- scripts/common.sh@365 -- # decimal 1 00:07:14.543 11:39:39 thread -- scripts/common.sh@353 -- # local d=1 00:07:14.543 11:39:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.543 11:39:39 thread -- scripts/common.sh@355 -- # echo 1 00:07:14.543 11:39:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.543 11:39:39 thread -- scripts/common.sh@366 -- # decimal 2 00:07:14.543 11:39:39 thread -- scripts/common.sh@353 -- # local d=2 00:07:14.543 11:39:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.543 11:39:39 thread -- scripts/common.sh@355 -- # echo 2 00:07:14.543 11:39:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.543 11:39:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.543 11:39:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.543 11:39:39 thread -- scripts/common.sh@368 -- # return 0 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.543 --rc genhtml_branch_coverage=1 00:07:14.543 --rc genhtml_function_coverage=1 00:07:14.543 --rc genhtml_legend=1 00:07:14.543 --rc geninfo_all_blocks=1 00:07:14.543 --rc geninfo_unexecuted_blocks=1 00:07:14.543 00:07:14.543 ' 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.543 --rc genhtml_branch_coverage=1 00:07:14.543 --rc genhtml_function_coverage=1 00:07:14.543 --rc genhtml_legend=1 00:07:14.543 --rc geninfo_all_blocks=1 00:07:14.543 --rc geninfo_unexecuted_blocks=1 00:07:14.543 00:07:14.543 ' 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:14.543 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.543 --rc genhtml_branch_coverage=1 00:07:14.543 --rc genhtml_function_coverage=1 00:07:14.543 --rc genhtml_legend=1 00:07:14.543 --rc geninfo_all_blocks=1 00:07:14.543 --rc geninfo_unexecuted_blocks=1 00:07:14.543 00:07:14.543 ' 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:14.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.543 --rc genhtml_branch_coverage=1 00:07:14.543 --rc genhtml_function_coverage=1 00:07:14.543 --rc genhtml_legend=1 00:07:14.543 --rc geninfo_all_blocks=1 00:07:14.543 --rc geninfo_unexecuted_blocks=1 00:07:14.543 00:07:14.543 ' 00:07:14.543 11:39:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.543 11:39:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.543 ************************************ 00:07:14.543 START TEST thread_poller_perf 00:07:14.543 ************************************ 00:07:14.543 11:39:39 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:14.543 [2024-11-04 11:39:39.959894] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:07:14.543 [2024-11-04 11:39:39.960907] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59920 ] 00:07:14.802 [2024-11-04 11:39:40.145044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.802 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:14.802 [2024-11-04 11:39:40.267167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.182 [2024-11-04T11:39:41.704Z] ====================================== 00:07:16.182 [2024-11-04T11:39:41.704Z] busy:2299475718 (cyc) 00:07:16.182 [2024-11-04T11:39:41.704Z] total_run_count: 391000 00:07:16.182 [2024-11-04T11:39:41.704Z] tsc_hz: 2290000000 (cyc) 00:07:16.182 [2024-11-04T11:39:41.704Z] ====================================== 00:07:16.182 [2024-11-04T11:39:41.704Z] poller_cost: 5881 (cyc), 2568 (nsec) 00:07:16.182 00:07:16.182 real 0m1.593s 00:07:16.182 user 0m1.374s 00:07:16.182 sys 0m0.111s 00:07:16.182 11:39:41 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.182 11:39:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:16.182 ************************************ 00:07:16.182 END TEST thread_poller_perf 00:07:16.182 ************************************ 00:07:16.182 11:39:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.182 11:39:41 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:16.182 11:39:41 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.182 11:39:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.182 ************************************ 00:07:16.182 START TEST thread_poller_perf 00:07:16.182 
************************************ 00:07:16.182 11:39:41 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.182 [2024-11-04 11:39:41.614523] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:16.182 [2024-11-04 11:39:41.614714] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:07:16.440 [2024-11-04 11:39:41.785426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.440 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:16.440 [2024-11-04 11:39:41.903596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.816 [2024-11-04T11:39:43.338Z] ====================================== 00:07:17.816 [2024-11-04T11:39:43.338Z] busy:2293953652 (cyc) 00:07:17.816 [2024-11-04T11:39:43.338Z] total_run_count: 4228000 00:07:17.816 [2024-11-04T11:39:43.338Z] tsc_hz: 2290000000 (cyc) 00:07:17.816 [2024-11-04T11:39:43.338Z] ====================================== 00:07:17.816 [2024-11-04T11:39:43.338Z] poller_cost: 542 (cyc), 236 (nsec) 00:07:17.816 ************************************ 00:07:17.816 END TEST thread_poller_perf 00:07:17.816 ************************************ 00:07:17.816 00:07:17.816 real 0m1.600s 00:07:17.816 user 0m1.402s 00:07:17.816 sys 0m0.089s 00:07:17.816 11:39:43 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.816 11:39:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.816 11:39:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:17.816 ************************************ 00:07:17.816 END TEST thread 00:07:17.816 ************************************ 00:07:17.816 
00:07:17.816 real 0m3.557s 00:07:17.816 user 0m2.959s 00:07:17.816 sys 0m0.395s 00:07:17.816 11:39:43 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.816 11:39:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.816 11:39:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:17.816 11:39:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:17.816 11:39:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.816 11:39:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.816 11:39:43 -- common/autotest_common.sh@10 -- # set +x 00:07:17.816 ************************************ 00:07:17.816 START TEST app_cmdline 00:07:17.816 ************************************ 00:07:17.816 11:39:43 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:18.075 * Looking for test storage... 00:07:18.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:18.075 11:39:43 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.075 11:39:43 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.075 11:39:43 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.075 11:39:43 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.075 11:39:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.075 11:39:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.075 11:39:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.075 11:39:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.076 11:39:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.076 --rc genhtml_branch_coverage=1 00:07:18.076 --rc genhtml_function_coverage=1 00:07:18.076 --rc 
genhtml_legend=1 00:07:18.076 --rc geninfo_all_blocks=1 00:07:18.076 --rc geninfo_unexecuted_blocks=1 00:07:18.076 00:07:18.076 ' 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.076 --rc genhtml_branch_coverage=1 00:07:18.076 --rc genhtml_function_coverage=1 00:07:18.076 --rc genhtml_legend=1 00:07:18.076 --rc geninfo_all_blocks=1 00:07:18.076 --rc geninfo_unexecuted_blocks=1 00:07:18.076 00:07:18.076 ' 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.076 --rc genhtml_branch_coverage=1 00:07:18.076 --rc genhtml_function_coverage=1 00:07:18.076 --rc genhtml_legend=1 00:07:18.076 --rc geninfo_all_blocks=1 00:07:18.076 --rc geninfo_unexecuted_blocks=1 00:07:18.076 00:07:18.076 ' 00:07:18.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.076 --rc genhtml_branch_coverage=1 00:07:18.076 --rc genhtml_function_coverage=1 00:07:18.076 --rc genhtml_legend=1 00:07:18.076 --rc geninfo_all_blocks=1 00:07:18.076 --rc geninfo_unexecuted_blocks=1 00:07:18.076 00:07:18.076 ' 00:07:18.076 11:39:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:18.076 11:39:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60040 00:07:18.076 11:39:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:18.076 11:39:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60040 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60040 ']' 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.076 11:39:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.335 [2024-11-04 11:39:43.611751] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:07:18.335 [2024-11-04 11:39:43.611976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:07:18.335 [2024-11-04 11:39:43.789465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.593 [2024-11-04 11:39:43.926148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.534 11:39:44 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.534 11:39:44 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:19.534 11:39:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:19.534 { 00:07:19.534 "version": "SPDK v25.01-pre git sha1 3edf9f121", 00:07:19.534 "fields": { 00:07:19.534 "major": 25, 00:07:19.534 "minor": 1, 00:07:19.534 "patch": 0, 00:07:19.534 "suffix": "-pre", 00:07:19.534 "commit": "3edf9f121" 00:07:19.534 } 00:07:19.534 } 00:07:19.534 11:39:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:19.534 11:39:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:19.534 11:39:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:19.534 11:39:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:19.534 11:39:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:19.534 11:39:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:19.534 11:39:45 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.534 11:39:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:19.534 11:39:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:19.534 11:39:45 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.792 11:39:45 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:19.792 11:39:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:19.792 11:39:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.792 request: 00:07:19.792 { 00:07:19.792 "method": "env_dpdk_get_mem_stats", 00:07:19.792 "req_id": 1 00:07:19.792 } 00:07:19.792 Got JSON-RPC error response 00:07:19.792 response: 00:07:19.792 { 00:07:19.792 "code": -32601, 00:07:19.792 "message": "Method not found" 00:07:19.792 } 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@653 -- # es=1 
00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.792 11:39:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60040 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60040 ']' 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60040 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:19.792 11:39:45 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60040 00:07:20.050 killing process with pid 60040 00:07:20.050 11:39:45 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:20.050 11:39:45 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:20.050 11:39:45 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60040' 00:07:20.050 11:39:45 app_cmdline -- common/autotest_common.sh@971 -- # kill 60040 00:07:20.050 11:39:45 app_cmdline -- common/autotest_common.sh@976 -- # wait 60040 00:07:22.582 00:07:22.582 real 0m4.635s 00:07:22.582 user 0m4.877s 00:07:22.582 sys 0m0.611s 00:07:22.582 11:39:47 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.582 11:39:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.582 ************************************ 00:07:22.582 END TEST app_cmdline 00:07:22.582 ************************************ 00:07:22.582 11:39:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:22.582 11:39:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.582 11:39:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.582 11:39:47 -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.582 ************************************ 00:07:22.582 START TEST version 00:07:22.582 ************************************ 00:07:22.582 11:39:47 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:22.582 * Looking for test storage... 00:07:22.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:22.839 11:39:48 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.839 11:39:48 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.839 11:39:48 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.839 11:39:48 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.839 11:39:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.839 11:39:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.839 11:39:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.839 11:39:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.839 11:39:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.839 11:39:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.839 11:39:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.839 11:39:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.839 11:39:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.839 11:39:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.839 11:39:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.839 11:39:48 version -- scripts/common.sh@344 -- # case "$op" in 00:07:22.839 11:39:48 version -- scripts/common.sh@345 -- # : 1 00:07:22.839 11:39:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.839 11:39:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.839 11:39:48 version -- scripts/common.sh@365 -- # decimal 1 00:07:22.839 11:39:48 version -- scripts/common.sh@353 -- # local d=1 00:07:22.839 11:39:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.839 11:39:48 version -- scripts/common.sh@355 -- # echo 1 00:07:22.839 11:39:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.839 11:39:48 version -- scripts/common.sh@366 -- # decimal 2 00:07:22.839 11:39:48 version -- scripts/common.sh@353 -- # local d=2 00:07:22.839 11:39:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.839 11:39:48 version -- scripts/common.sh@355 -- # echo 2 00:07:22.839 11:39:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.839 11:39:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.839 11:39:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.839 11:39:48 version -- scripts/common.sh@368 -- # return 0 00:07:22.839 11:39:48 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.839 11:39:48 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.839 --rc genhtml_branch_coverage=1 00:07:22.839 --rc genhtml_function_coverage=1 00:07:22.839 --rc genhtml_legend=1 00:07:22.839 --rc geninfo_all_blocks=1 00:07:22.839 --rc geninfo_unexecuted_blocks=1 00:07:22.839 00:07:22.840 ' 00:07:22.840 11:39:48 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.840 --rc genhtml_branch_coverage=1 00:07:22.840 --rc genhtml_function_coverage=1 00:07:22.840 --rc genhtml_legend=1 00:07:22.840 --rc geninfo_all_blocks=1 00:07:22.840 --rc geninfo_unexecuted_blocks=1 00:07:22.840 00:07:22.840 ' 00:07:22.840 11:39:48 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.840 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.840 --rc genhtml_branch_coverage=1 00:07:22.840 --rc genhtml_function_coverage=1 00:07:22.840 --rc genhtml_legend=1 00:07:22.840 --rc geninfo_all_blocks=1 00:07:22.840 --rc geninfo_unexecuted_blocks=1 00:07:22.840 00:07:22.840 ' 00:07:22.840 11:39:48 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.840 --rc genhtml_branch_coverage=1 00:07:22.840 --rc genhtml_function_coverage=1 00:07:22.840 --rc genhtml_legend=1 00:07:22.840 --rc geninfo_all_blocks=1 00:07:22.840 --rc geninfo_unexecuted_blocks=1 00:07:22.840 00:07:22.840 ' 00:07:22.840 11:39:48 version -- app/version.sh@17 -- # get_header_version major 00:07:22.840 11:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.840 11:39:48 version -- app/version.sh@17 -- # major=25 00:07:22.840 11:39:48 version -- app/version.sh@18 -- # get_header_version minor 00:07:22.840 11:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.840 11:39:48 version -- app/version.sh@18 -- # minor=1 00:07:22.840 11:39:48 version -- app/version.sh@19 -- # get_header_version patch 00:07:22.840 11:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.840 11:39:48 version -- app/version.sh@19 -- # patch=0 00:07:22.840 
11:39:48 version -- app/version.sh@20 -- # get_header_version suffix 00:07:22.840 11:39:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # cut -f2 00:07:22.840 11:39:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.840 11:39:48 version -- app/version.sh@20 -- # suffix=-pre 00:07:22.840 11:39:48 version -- app/version.sh@22 -- # version=25.1 00:07:22.840 11:39:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:22.840 11:39:48 version -- app/version.sh@28 -- # version=25.1rc0 00:07:22.840 11:39:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:22.840 11:39:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:22.840 11:39:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:22.840 11:39:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:22.840 00:07:22.840 real 0m0.333s 00:07:22.840 user 0m0.212s 00:07:22.840 sys 0m0.174s 00:07:22.840 11:39:48 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.840 11:39:48 version -- common/autotest_common.sh@10 -- # set +x 00:07:22.840 ************************************ 00:07:22.840 END TEST version 00:07:22.840 ************************************ 00:07:23.098 11:39:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:23.098 11:39:48 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:23.098 11:39:48 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:23.098 11:39:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:23.098 11:39:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.098 11:39:48 -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.098 ************************************ 00:07:23.098 START TEST bdev_raid 00:07:23.098 ************************************ 00:07:23.098 11:39:48 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:23.098 * Looking for test storage... 00:07:23.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:23.098 11:39:48 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:23.098 11:39:48 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:23.098 11:39:48 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:07:23.098 11:39:48 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:23.098 11:39:48 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.357 11:39:48 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.357 --rc genhtml_branch_coverage=1 00:07:23.357 --rc genhtml_function_coverage=1 00:07:23.357 --rc genhtml_legend=1 00:07:23.357 --rc geninfo_all_blocks=1 00:07:23.357 --rc geninfo_unexecuted_blocks=1 00:07:23.357 00:07:23.357 ' 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.357 --rc genhtml_branch_coverage=1 00:07:23.357 --rc genhtml_function_coverage=1 00:07:23.357 --rc genhtml_legend=1 00:07:23.357 --rc geninfo_all_blocks=1 00:07:23.357 --rc geninfo_unexecuted_blocks=1 00:07:23.357 00:07:23.357 ' 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:07:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.357 --rc genhtml_branch_coverage=1 00:07:23.357 --rc genhtml_function_coverage=1 00:07:23.357 --rc genhtml_legend=1 00:07:23.357 --rc geninfo_all_blocks=1 00:07:23.357 --rc geninfo_unexecuted_blocks=1 00:07:23.357 00:07:23.357 ' 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.357 --rc genhtml_branch_coverage=1 00:07:23.357 --rc genhtml_function_coverage=1 00:07:23.357 --rc genhtml_legend=1 00:07:23.357 --rc geninfo_all_blocks=1 00:07:23.357 --rc geninfo_unexecuted_blocks=1 00:07:23.357 00:07:23.357 ' 00:07:23.357 11:39:48 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:23.357 11:39:48 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:23.357 11:39:48 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:23.357 11:39:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:23.357 11:39:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:23.357 11:39:48 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:23.357 11:39:48 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.357 11:39:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.357 ************************************ 00:07:23.357 START TEST raid1_resize_data_offset_test 00:07:23.357 ************************************ 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60233 00:07:23.357 Process raid pid: 60233 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60233' 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60233 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60233 ']' 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.357 11:39:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.357 [2024-11-04 11:39:48.770506] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:07:23.357 [2024-11-04 11:39:48.770676] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.618 [2024-11-04 11:39:48.950539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.618 [2024-11-04 11:39:49.098275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.876 [2024-11-04 11:39:49.362187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.876 [2024-11-04 11:39:49.362232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.187 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.187 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:07:24.187 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:24.187 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.187 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.446 malloc0 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.446 malloc1 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.446 11:39:49 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.446 null0 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.446 [2024-11-04 11:39:49.845841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:24.446 [2024-11-04 11:39:49.847768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:24.446 [2024-11-04 11:39:49.847820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:24.446 [2024-11-04 11:39:49.847991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.446 [2024-11-04 11:39:49.848007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:24.446 [2024-11-04 11:39:49.848361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:24.446 [2024-11-04 11:39:49.848635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:24.446 [2024-11-04 11:39:49.848682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:24.446 [2024-11-04 11:39:49.848915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.446 [2024-11-04 11:39:49.905795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.446 11:39:49 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.030 malloc2 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.030 [2024-11-04 11:39:50.503175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:25.030 [2024-11-04 11:39:50.523851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.030 [2024-11-04 11:39:50.526081] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.030 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60233 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60233 ']' 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60233 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60233 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:25.291 killing process with pid 60233 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60233' 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60233 00:07:25.291 [2024-11-04 11:39:50.616551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.291 11:39:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60233 00:07:25.291 [2024-11-04 11:39:50.617706] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:25.291 [2024-11-04 11:39:50.617765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.291 [2024-11-04 11:39:50.617784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:25.291 [2024-11-04 11:39:50.654693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.291 [2024-11-04 11:39:50.655046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.291 [2024-11-04 11:39:50.655073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:27.191 [2024-11-04 11:39:52.646610] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.567 11:39:53 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:28.567 00:07:28.567 real 0m5.253s 00:07:28.567 user 0m5.223s 00:07:28.567 sys 0m0.547s 00:07:28.567 11:39:53 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.567 11:39:53 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 ************************************ 00:07:28.567 END TEST raid1_resize_data_offset_test 00:07:28.567 ************************************ 00:07:28.567 11:39:53 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:28.567 11:39:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:28.567 11:39:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.567 11:39:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 ************************************ 00:07:28.567 START TEST raid0_resize_superblock_test 00:07:28.567 ************************************ 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:28.567 Process raid pid: 60322 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60322 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60322' 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60322 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60322 ']' 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.567 11:39:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 [2024-11-04 11:39:54.024815] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:28.567 [2024-11-04 11:39:54.025374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.826 [2024-11-04 11:39:54.196792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.826 [2024-11-04 11:39:54.342916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.083 [2024-11-04 11:39:54.579159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.083 [2024-11-04 11:39:54.579224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.649 11:39:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.649 11:39:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:29.649 11:39:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:29.649 11:39:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.649 11:39:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:30.217 malloc0 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 [2024-11-04 11:39:55.540585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:30.217 [2024-11-04 11:39:55.540650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.217 [2024-11-04 11:39:55.540677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:30.217 [2024-11-04 11:39:55.540688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.217 [2024-11-04 11:39:55.542827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.217 [2024-11-04 11:39:55.542863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:30.217 pt0 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 96392bcb-931c-40ba-8c8b-5d630fbe809c 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 d435bf56-9b80-4309-a0e8-1c451f6e4dfe 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 90ff673f-7542-4025-8627-92c8442a2ffb 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 [2024-11-04 11:39:55.674880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d435bf56-9b80-4309-a0e8-1c451f6e4dfe is claimed 00:07:30.217 [2024-11-04 11:39:55.674972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 90ff673f-7542-4025-8627-92c8442a2ffb is claimed 00:07:30.217 [2024-11-04 11:39:55.675090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:30.217 [2024-11-04 11:39:55.675104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:30.217 [2024-11-04 11:39:55.675420] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:30.217 [2024-11-04 11:39:55.675627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:30.217 [2024-11-04 11:39:55.675647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:30.217 [2024-11-04 11:39:55.675829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:30.477 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.477 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:30.477 11:39:55 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.477 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.478 [2024-11-04 11:39:55.782943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.478 [2024-11-04 11:39:55.830849] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.478 [2024-11-04 11:39:55.830887] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd435bf56-9b80-4309-a0e8-1c451f6e4dfe' was resized: old size 131072, new size 204800 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.478 [2024-11-04 11:39:55.842803] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:30.478 [2024-11-04 11:39:55.842847] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '90ff673f-7542-4025-8627-92c8442a2ffb' was resized: old size 131072, new size 204800 00:07:30.478 [2024-11-04 11:39:55.842873] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.478 11:39:55 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.478 [2024-11-04 11:39:55.958670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.478 [2024-11-04 11:39:55.986372] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:30.478 [2024-11-04 11:39:55.986457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:30.478 [2024-11-04 11:39:55.986471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.478 [2024-11-04 11:39:55.986489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:30.478 [2024-11-04 11:39:55.986608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.478 [2024-11-04 11:39:55.986645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.478 [2024-11-04 11:39:55.986657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.478 11:39:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.738 [2024-11-04 11:39:55.998280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:30.738 [2024-11-04 11:39:55.998342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.738 [2024-11-04 11:39:55.998367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:30.738 [2024-11-04 11:39:55.998379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.738 [2024-11-04 11:39:56.000852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.738 [2024-11-04 11:39:56.000891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:30.738 [2024-11-04 11:39:56.002812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d435bf56-9b80-4309-a0e8-1c451f6e4dfe 00:07:30.738 [2024-11-04 11:39:56.002920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d435bf56-9b80-4309-a0e8-1c451f6e4dfe is claimed 00:07:30.738 [2024-11-04 11:39:56.003085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 90ff673f-7542-4025-8627-92c8442a2ffb 00:07:30.738 [2024-11-04 11:39:56.003120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 90ff673f-7542-4025-8627-92c8442a2ffb is claimed 00:07:30.738 [2024-11-04 11:39:56.003340] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 90ff673f-7542-4025-8627-92c8442a2ffb (2) smaller than existing raid bdev Raid (3) 00:07:30.738 [2024-11-04 11:39:56.003374] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev d435bf56-9b80-4309-a0e8-1c451f6e4dfe: File exists 00:07:30.738 [2024-11-04 11:39:56.003425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:30.738 [2024-11-04 11:39:56.003438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:30.738 [2024-11-04 11:39:56.003716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:30.738 pt0 00:07:30.738 [2024-11-04 11:39:56.003928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:30.738 [2024-11-04 11:39:56.003946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:30.738 [2024-11-04 11:39:56.004148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.738 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.738 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:30.738 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.739 [2024-11-04 11:39:56.027063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60322 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60322 ']' 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60322 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60322 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:30.739 killing process with pid 60322 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60322' 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60322 00:07:30.739 [2024-11-04 11:39:56.108429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.739 11:39:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60322 00:07:30.739 [2024-11-04 11:39:56.108535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.739 [2024-11-04 11:39:56.108592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.739 [2024-11-04 11:39:56.108603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:32.118 [2024-11-04 11:39:57.594286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.497 11:39:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:33.497 00:07:33.497 real 0m4.794s 00:07:33.497 user 0m5.079s 00:07:33.497 sys 0m0.581s 00:07:33.497 11:39:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.497 11:39:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.497 
************************************ 00:07:33.497 END TEST raid0_resize_superblock_test 00:07:33.497 ************************************ 00:07:33.497 11:39:58 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:33.497 11:39:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:33.497 11:39:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.497 11:39:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.497 ************************************ 00:07:33.497 START TEST raid1_resize_superblock_test 00:07:33.497 ************************************ 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60421 00:07:33.497 Process raid pid: 60421 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60421' 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60421 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60421 ']' 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:33.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:33.497 11:39:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.497 [2024-11-04 11:39:58.893230] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:33.497 [2024-11-04 11:39:58.893384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.757 [2024-11-04 11:39:59.073587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.757 [2024-11-04 11:39:59.198175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.016 [2024-11-04 11:39:59.403902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.016 [2024-11-04 11:39:59.403955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.274 11:39:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:34.274 11:39:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:34.274 11:39:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:34.274 11:39:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.274 11:39:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.211 malloc0 00:07:35.211 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.211 11:40:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:35.211 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.211 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.211 [2024-11-04 11:40:00.373007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:35.211 [2024-11-04 11:40:00.373075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.211 [2024-11-04 11:40:00.373100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:35.211 [2024-11-04 11:40:00.373112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.212 [2024-11-04 11:40:00.375331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.212 [2024-11-04 11:40:00.375369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:35.212 pt0 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 22e20f8e-ecc7-4b5b-82ca-73524978ac0a 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 11005e80-bb4a-4fd0-a9be-42b9bc9cb465 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 122cc7b7-14c1-4bf8-af27-77c9ed8c69fc 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 [2024-11-04 11:40:00.507436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11005e80-bb4a-4fd0-a9be-42b9bc9cb465 is claimed 00:07:35.212 [2024-11-04 11:40:00.507537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 122cc7b7-14c1-4bf8-af27-77c9ed8c69fc is claimed 00:07:35.212 [2024-11-04 11:40:00.507673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:35.212 [2024-11-04 11:40:00.507689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:35.212 [2024-11-04 11:40:00.507992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:35.212 [2024-11-04 11:40:00.508247] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:35.212 [2024-11-04 11:40:00.508269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:35.212 [2024-11-04 11:40:00.508474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:35.212 [2024-11-04 11:40:00.615542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 [2024-11-04 11:40:00.663387] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:35.212 [2024-11-04 11:40:00.663438] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '11005e80-bb4a-4fd0-a9be-42b9bc9cb465' was resized: old size 131072, new size 204800 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:35.212 11:40:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 [2024-11-04 11:40:00.675334] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:35.212 [2024-11-04 11:40:00.675376] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '122cc7b7-14c1-4bf8-af27-77c9ed8c69fc' was resized: old size 131072, new size 204800 00:07:35.212 [2024-11-04 11:40:00.675433] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.212 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.472 [2024-11-04 11:40:00.775241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.472 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.472 [2024-11-04 11:40:00.814938] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:35.472 [2024-11-04 11:40:00.815023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:35.472 [2024-11-04 11:40:00.815052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:35.472 [2024-11-04 11:40:00.815216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.472 [2024-11-04 11:40:00.815450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.472 [2024-11-04 11:40:00.815527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.473 [2024-11-04 11:40:00.815542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.473 [2024-11-04 11:40:00.822810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:35.473 [2024-11-04 11:40:00.822869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.473 [2024-11-04 11:40:00.822892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:35.473 [2024-11-04 11:40:00.822905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.473 [2024-11-04 11:40:00.825277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.473 [2024-11-04 11:40:00.825315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:35.473 [2024-11-04 11:40:00.827113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
11005e80-bb4a-4fd0-a9be-42b9bc9cb465 00:07:35.473 [2024-11-04 11:40:00.827217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11005e80-bb4a-4fd0-a9be-42b9bc9cb465 is claimed 00:07:35.473 [2024-11-04 11:40:00.827356] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 122cc7b7-14c1-4bf8-af27-77c9ed8c69fc 00:07:35.473 [2024-11-04 11:40:00.827378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 122cc7b7-14c1-4bf8-af27-77c9ed8c69fc is claimed 00:07:35.473 [2024-11-04 11:40:00.827592] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 122cc7b7-14c1-4bf8-af27-77c9ed8c69fc (2) smaller than existing raid bdev Raid (3) 00:07:35.473 [2024-11-04 11:40:00.827623] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 11005e80-bb4a-4fd0-a9be-42b9bc9cb465: File exists 00:07:35.473 [2024-11-04 11:40:00.827657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:35.473 [2024-11-04 11:40:00.827669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:35.473 pt0 00:07:35.473 [2024-11-04 11:40:00.827963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:35.473 [2024-11-04 11:40:00.828145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:35.473 [2024-11-04 11:40:00.828157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.473 [2024-11-04 11:40:00.828348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:35.473 [2024-11-04 11:40:00.843382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60421 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60421 ']' 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60421 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60421 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.473 killing process with pid 60421 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60421' 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60421 00:07:35.473 [2024-11-04 11:40:00.917295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.473 [2024-11-04 11:40:00.917426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.473 11:40:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60421 00:07:35.473 [2024-11-04 11:40:00.917495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.473 [2024-11-04 11:40:00.917506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:37.379 [2024-11-04 11:40:02.446004] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.317 11:40:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:38.317 00:07:38.317 real 0m4.790s 00:07:38.317 user 0m5.030s 00:07:38.317 sys 0m0.566s 00:07:38.317 11:40:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.317 11:40:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.317 ************************************ 00:07:38.317 END TEST raid1_resize_superblock_test 00:07:38.317 
************************************ 00:07:38.317 11:40:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:38.317 11:40:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:38.317 11:40:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:38.317 11:40:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:38.317 11:40:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:38.317 11:40:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:38.317 11:40:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:38.317 11:40:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.317 11:40:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.317 ************************************ 00:07:38.317 START TEST raid_function_test_raid0 00:07:38.317 ************************************ 00:07:38.317 11:40:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:07:38.317 11:40:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:38.317 11:40:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:38.317 11:40:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:38.317 11:40:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60529 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60529' 00:07:38.318 Process raid pid: 60529 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60529 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 
60529 ']' 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.318 11:40:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:38.318 [2024-11-04 11:40:03.765928] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:38.318 [2024-11-04 11:40:03.766059] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.577 [2024-11-04 11:40:03.943305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.577 [2024-11-04 11:40:04.066749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.837 [2024-11-04 11:40:04.293250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.837 [2024-11-04 11:40:04.293309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:39.406 11:40:04 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:39.406 Base_1 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:39.406 Base_2 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:39.406 [2024-11-04 11:40:04.723843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:39.406 [2024-11-04 11:40:04.725833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:39.406 [2024-11-04 11:40:04.725918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:39.406 [2024-11-04 11:40:04.725936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:39.406 [2024-11-04 11:40:04.726246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:39.406 [2024-11-04 11:40:04.726424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:39.406 [2024-11-04 11:40:04.726439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:39.406 [2024-11-04 11:40:04.726631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:07:39.406 11:40:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:39.666 [2024-11-04 11:40:04.971492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:39.666 /dev/nbd0 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.666 1+0 records in 00:07:39.666 1+0 records out 00:07:39.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410412 s, 10.0 MB/s 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@888 -- # size=4096 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:39.666 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.926 { 00:07:39.926 "nbd_device": "/dev/nbd0", 00:07:39.926 "bdev_name": "raid" 00:07:39.926 } 00:07:39.926 ]' 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.926 { 00:07:39.926 "nbd_device": "/dev/nbd0", 00:07:39.926 "bdev_name": "raid" 00:07:39.926 } 00:07:39.926 ]' 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:39.926 
11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:39.926 4096+0 records in 00:07:39.926 4096+0 records out 00:07:39.926 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0302894 s, 69.2 MB/s 00:07:39.926 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:40.185 4096+0 records in 00:07:40.185 4096+0 records out 00:07:40.185 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.252358 s, 8.3 MB/s 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:40.185 128+0 records in 00:07:40.185 128+0 records out 00:07:40.185 65536 bytes (66 kB, 64 KiB) copied, 0.00125974 s, 52.0 MB/s 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:40.185 2035+0 records in 00:07:40.185 2035+0 records out 00:07:40.185 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0140283 s, 74.3 MB/s 00:07:40.185 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:40.443 456+0 records in 00:07:40.443 456+0 records out 00:07:40.443 233472 bytes (233 kB, 228 KiB) copied, 0.00396612 s, 58.9 MB/s 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:40.443 11:40:05 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.443 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:40.701 11:40:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.701 [2024-11-04 11:40:06.002678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:40.701 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60529 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60529 ']' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@956 -- # kill -0 60529 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60529 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60529' 00:07:40.959 killing process with pid 60529 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60529 00:07:40.959 [2024-11-04 11:40:06.343746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.959 [2024-11-04 11:40:06.343875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.959 11:40:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60529 00:07:40.959 [2024-11-04 11:40:06.343936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.959 [2024-11-04 11:40:06.343958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:41.217 [2024-11-04 11:40:06.562252] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.590 11:40:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:42.590 00:07:42.590 real 0m4.147s 00:07:42.590 user 0m4.836s 00:07:42.590 sys 0m0.998s 00:07:42.590 11:40:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.590 11:40:07 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:42.590 ************************************ 00:07:42.590 END TEST raid_function_test_raid0 00:07:42.590 ************************************ 00:07:42.590 11:40:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:42.590 11:40:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:42.590 11:40:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:42.590 11:40:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.590 ************************************ 00:07:42.590 START TEST raid_function_test_concat 00:07:42.590 ************************************ 00:07:42.590 11:40:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:07:42.590 11:40:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:42.590 11:40:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:42.590 11:40:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:42.590 11:40:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60658 00:07:42.590 11:40:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60658' 00:07:42.591 Process raid pid: 60658 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60658 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60658 ']' 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.591 11:40:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:42.591 [2024-11-04 11:40:07.977105] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:42.591 [2024-11-04 11:40:07.977243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.849 [2024-11-04 11:40:08.157401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.849 [2024-11-04 11:40:08.310659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.108 [2024-11-04 11:40:08.584241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.108 [2024-11-04 11:40:08.584295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 Base_1 
00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.367 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:43.625 Base_2 00:07:43.625 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.625 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:43.625 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.625 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:43.626 [2024-11-04 11:40:08.912093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:43.626 [2024-11-04 11:40:08.913873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:43.626 [2024-11-04 11:40:08.913943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:43.626 [2024-11-04 11:40:08.913958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:43.626 [2024-11-04 11:40:08.914233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:43.626 [2024-11-04 11:40:08.914387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:43.626 [2024-11-04 11:40:08.914414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:43.626 [2024-11-04 11:40:08.914574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.626 11:40:08 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:43.626 11:40:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:07:43.884 [2024-11-04 11:40:09.175723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:43.884 /dev/nbd0 00:07:43.884 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.885 1+0 records in 00:07:43.885 1+0 records out 00:07:43.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276447 s, 14.8 MB/s 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:43.885 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:44.152 { 00:07:44.152 "nbd_device": "/dev/nbd0", 00:07:44.152 "bdev_name": "raid" 00:07:44.152 } 00:07:44.152 ]' 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:44.152 { 00:07:44.152 "nbd_device": "/dev/nbd0", 00:07:44.152 "bdev_name": "raid" 00:07:44.152 } 00:07:44.152 ]' 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:44.152 11:40:09 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:44.152 4096+0 records in 00:07:44.152 4096+0 records out 00:07:44.152 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0322258 s, 65.1 MB/s 00:07:44.152 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:44.423 4096+0 records in 00:07:44.423 4096+0 records out 00:07:44.423 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.213025 s, 9.8 MB/s 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:44.423 128+0 records in 00:07:44.423 128+0 records out 00:07:44.423 65536 bytes (66 kB, 64 KiB) copied, 0.00115383 s, 56.8 MB/s 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:44.423 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:44.424 2035+0 records in 00:07:44.424 2035+0 records out 00:07:44.424 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0135527 s, 76.9 MB/s 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:44.424 456+0 records in 00:07:44.424 456+0 records out 00:07:44.424 233472 bytes (233 kB, 228 KiB) copied, 0.00359692 s, 64.9 MB/s 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.424 11:40:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:44.682 [2024-11-04 11:40:10.136235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:44.682 11:40:10 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:44.682 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60658 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60658 ']' 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- 
# kill -0 60658 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.941 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60658 00:07:45.199 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:45.199 killing process with pid 60658 00:07:45.199 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:45.199 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60658' 00:07:45.199 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60658 00:07:45.199 11:40:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60658 00:07:45.199 [2024-11-04 11:40:10.465080] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.199 [2024-11-04 11:40:10.465210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.199 [2024-11-04 11:40:10.465279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.199 [2024-11-04 11:40:10.465296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:45.199 [2024-11-04 11:40:10.687431] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.574 11:40:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:46.574 00:07:46.574 real 0m3.987s 00:07:46.574 user 0m4.570s 00:07:46.574 sys 0m1.052s 00:07:46.574 11:40:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.574 11:40:11 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.574 ************************************ 00:07:46.574 END TEST raid_function_test_concat 00:07:46.574 ************************************ 00:07:46.574 11:40:11 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:46.574 11:40:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:46.574 11:40:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.574 11:40:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.574 ************************************ 00:07:46.574 START TEST raid0_resize_test 00:07:46.574 ************************************ 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:46.574 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60786 00:07:46.574 Process raid pid: 60786 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60786' 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60786 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@833 -- # '[' -z 60786 ']' 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.575 11:40:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:46.575 [2024-11-04 11:40:12.012800] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:07:46.575 [2024-11-04 11:40:12.012921] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.833 [2024-11-04 11:40:12.191293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.833 [2024-11-04 11:40:12.312035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.091 [2024-11-04 11:40:12.529266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.091 [2024-11-04 11:40:12.529339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 Base_1 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 Base_2 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 [2024-11-04 11:40:12.911896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:47.658 [2024-11-04 11:40:12.913883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:47.658 [2024-11-04 11:40:12.913958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:47.658 [2024-11-04 11:40:12.913976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:47.658 [2024-11-04 11:40:12.914264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:47.658 [2024-11-04 11:40:12.914445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:47.658 [2024-11-04 11:40:12.914462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:47.658 [2024-11-04 11:40:12.914654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 [2024-11-04 11:40:12.919836] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:47.658 [2024-11-04 11:40:12.919866] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:47.658 true 
00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 [2024-11-04 11:40:12.931990] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 [2024-11-04 11:40:12.979786] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:47.658 [2024-11-04 11:40:12.979828] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:47.658 [2024-11-04 11:40:12.979860] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:47.658 true 
00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.658 11:40:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.658 [2024-11-04 11:40:12.995957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60786 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60786 ']' 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60786 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60786 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:47.658 killing process with pid 60786 
00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60786' 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60786 00:07:47.658 11:40:13 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60786 00:07:47.658 [2024-11-04 11:40:13.066847] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.658 [2024-11-04 11:40:13.066956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.658 [2024-11-04 11:40:13.067025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.658 [2024-11-04 11:40:13.067039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:47.658 [2024-11-04 11:40:13.087455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.093 11:40:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:49.093 00:07:49.093 real 0m2.330s 00:07:49.093 user 0m2.486s 00:07:49.093 sys 0m0.351s 00:07:49.093 11:40:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.093 ************************************ 00:07:49.093 END TEST raid0_resize_test 00:07:49.093 ************************************ 00:07:49.093 11:40:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.093 11:40:14 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:49.093 11:40:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:49.093 11:40:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.093 11:40:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.093 ************************************ 
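The raid0 (and, below, raid1) resize checks recorded in this log come down to simple block-count arithmetic: each null bdev is created with blksize=512, raid0 concatenates the capacity of its base bdevs (sum), and raid1 mirrors them (minimum). A minimal sketch reproducing the numbers seen in the log — the `blk` helper is illustrative only, not part of the test scripts:

```shell
# MiB -> 512-byte blocks, matching the blksize=512 used by raid_resize_test
blk() { echo $(( $1 * 1024 * 1024 / 512 )); }

base=$(blk 32)                                    # 32 MiB base bdev -> 65536 blocks
echo "raid0 initial blkcnt: $(( base + base ))"   # 131072 (sum of bases)
echo "raid1 initial blkcnt: $base"                # 65536  (min of bases)

base=$(blk 64)                                    # after bdev_null_resize Base_N 64
echo "raid0 resized blkcnt: $(( base + base ))"   # 262144
echo "raid1 resized blkcnt: $base"                # 131072
```

These are exactly the `num_blocks` values the tests extract from `bdev_get_bdevs -b Raid` with `jq '.[].num_blocks'` and compare against `expected_size`.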
00:07:49.093 START TEST raid1_resize_test 00:07:49.093 ************************************ 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60842 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60842' 00:07:49.093 Process raid pid: 60842 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60842 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60842 ']' 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.093 11:40:14 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.093 [2024-11-04 11:40:14.387704] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:49.093 [2024-11-04 11:40:14.387830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.093 [2024-11-04 11:40:14.567623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.351 [2024-11-04 11:40:14.687791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.609 [2024-11-04 11:40:14.909836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.609 [2024-11-04 11:40:14.909877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.868 Base_1 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:49.868 
11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.868 Base_2 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.868 [2024-11-04 11:40:15.276264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:49.868 [2024-11-04 11:40:15.278232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:49.868 [2024-11-04 11:40:15.278307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:49.868 [2024-11-04 11:40:15.278323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:49.868 [2024-11-04 11:40:15.278660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:49.868 [2024-11-04 11:40:15.278840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:49.868 [2024-11-04 11:40:15.278856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:49.868 [2024-11-04 11:40:15.279046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:49.868 11:40:15 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.868 [2024-11-04 11:40:15.284183] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:49.868 [2024-11-04 11:40:15.284214] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:49.868 true 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.868 [2024-11-04 11:40:15.296386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:49.868 [2024-11-04 11:40:15.344158] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:49.868 [2024-11-04 11:40:15.344194] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:49.868 [2024-11-04 11:40:15.344223] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:49.868 true 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:49.868 [2024-11-04 11:40:15.356372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.868 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60842 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60842 ']' 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60842 00:07:50.127 
11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60842 00:07:50.127 killing process with pid 60842 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60842' 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60842 00:07:50.127 11:40:15 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60842 00:07:50.127 [2024-11-04 11:40:15.437592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.128 [2024-11-04 11:40:15.437692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.128 [2024-11-04 11:40:15.438257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.128 [2024-11-04 11:40:15.438287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:50.128 [2024-11-04 11:40:15.456783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.517 11:40:16 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:51.517 00:07:51.517 real 0m2.322s 00:07:51.517 user 0m2.502s 00:07:51.517 sys 0m0.307s 00:07:51.517 11:40:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.517 11:40:16 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.517 ************************************ 00:07:51.517 END TEST raid1_resize_test 
00:07:51.517 ************************************ 00:07:51.517 11:40:16 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:51.517 11:40:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:51.517 11:40:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:51.517 11:40:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:51.517 11:40:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.517 11:40:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.517 ************************************ 00:07:51.517 START TEST raid_state_function_test 00:07:51.517 ************************************ 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:51.517 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:51.518 11:40:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60905 00:07:51.518 Process raid pid: 60905 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60905' 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60905 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60905 ']' 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.518 11:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:51.518 [2024-11-04 11:40:16.783934] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:07:51.518 [2024-11-04 11:40:16.784067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.518 [2024-11-04 11:40:16.955284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.776 [2024-11-04 11:40:17.077698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.776 [2024-11-04 11:40:17.296116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.776 [2024-11-04 11:40:17.296170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 
-b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.342 [2024-11-04 11:40:17.667993] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.342 [2024-11-04 11:40:17.668069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.342 [2024-11-04 11:40:17.668082] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.342 [2024-11-04 11:40:17.668093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.342 11:40:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.342 "name": "Existed_Raid", 00:07:52.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.342 "strip_size_kb": 64, 00:07:52.342 "state": "configuring", 00:07:52.342 "raid_level": "raid0", 00:07:52.342 "superblock": false, 00:07:52.342 "num_base_bdevs": 2, 00:07:52.342 "num_base_bdevs_discovered": 0, 00:07:52.342 "num_base_bdevs_operational": 2, 00:07:52.342 "base_bdevs_list": [ 00:07:52.342 { 00:07:52.342 "name": "BaseBdev1", 00:07:52.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.342 "is_configured": false, 00:07:52.342 "data_offset": 0, 00:07:52.342 "data_size": 0 00:07:52.342 }, 00:07:52.342 { 00:07:52.342 "name": "BaseBdev2", 00:07:52.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.342 "is_configured": false, 00:07:52.342 "data_offset": 0, 00:07:52.342 "data_size": 0 00:07:52.342 } 00:07:52.342 ] 00:07:52.342 }' 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.342 11:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.601 11:40:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.601 [2024-11-04 11:40:18.047304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.601 [2024-11-04 11:40:18.047342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.601 [2024-11-04 11:40:18.059287] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.601 [2024-11-04 11:40:18.059332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.601 [2024-11-04 11:40:18.059343] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.601 [2024-11-04 11:40:18.059356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.601 [2024-11-04 11:40:18.109590] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.601 BaseBdev1 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.601 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.860 [ 00:07:52.860 { 00:07:52.860 "name": "BaseBdev1", 00:07:52.860 "aliases": [ 00:07:52.860 "bdbdfaca-615f-4660-a0bb-069c7ad633f5" 00:07:52.860 ], 00:07:52.860 "product_name": "Malloc disk", 00:07:52.860 "block_size": 512, 00:07:52.860 "num_blocks": 65536, 00:07:52.860 "uuid": 
"bdbdfaca-615f-4660-a0bb-069c7ad633f5", 00:07:52.860 "assigned_rate_limits": { 00:07:52.860 "rw_ios_per_sec": 0, 00:07:52.860 "rw_mbytes_per_sec": 0, 00:07:52.860 "r_mbytes_per_sec": 0, 00:07:52.860 "w_mbytes_per_sec": 0 00:07:52.860 }, 00:07:52.860 "claimed": true, 00:07:52.860 "claim_type": "exclusive_write", 00:07:52.860 "zoned": false, 00:07:52.860 "supported_io_types": { 00:07:52.860 "read": true, 00:07:52.860 "write": true, 00:07:52.860 "unmap": true, 00:07:52.860 "flush": true, 00:07:52.860 "reset": true, 00:07:52.860 "nvme_admin": false, 00:07:52.860 "nvme_io": false, 00:07:52.860 "nvme_io_md": false, 00:07:52.860 "write_zeroes": true, 00:07:52.860 "zcopy": true, 00:07:52.860 "get_zone_info": false, 00:07:52.860 "zone_management": false, 00:07:52.860 "zone_append": false, 00:07:52.860 "compare": false, 00:07:52.860 "compare_and_write": false, 00:07:52.860 "abort": true, 00:07:52.860 "seek_hole": false, 00:07:52.860 "seek_data": false, 00:07:52.860 "copy": true, 00:07:52.860 "nvme_iov_md": false 00:07:52.860 }, 00:07:52.860 "memory_domains": [ 00:07:52.860 { 00:07:52.860 "dma_device_id": "system", 00:07:52.860 "dma_device_type": 1 00:07:52.860 }, 00:07:52.860 { 00:07:52.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.860 "dma_device_type": 2 00:07:52.860 } 00:07:52.860 ], 00:07:52.860 "driver_specific": {} 00:07:52.860 } 00:07:52.860 ] 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.860 11:40:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.860 "name": "Existed_Raid", 00:07:52.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.860 "strip_size_kb": 64, 00:07:52.860 "state": "configuring", 00:07:52.860 "raid_level": "raid0", 00:07:52.860 "superblock": false, 00:07:52.860 "num_base_bdevs": 2, 00:07:52.860 "num_base_bdevs_discovered": 1, 00:07:52.860 "num_base_bdevs_operational": 2, 00:07:52.860 "base_bdevs_list": [ 00:07:52.860 { 00:07:52.860 "name": "BaseBdev1", 00:07:52.860 "uuid": "bdbdfaca-615f-4660-a0bb-069c7ad633f5", 00:07:52.860 "is_configured": true, 00:07:52.860 "data_offset": 0, 
00:07:52.860 "data_size": 65536 00:07:52.860 }, 00:07:52.860 { 00:07:52.860 "name": "BaseBdev2", 00:07:52.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.860 "is_configured": false, 00:07:52.860 "data_offset": 0, 00:07:52.860 "data_size": 0 00:07:52.860 } 00:07:52.860 ] 00:07:52.860 }' 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.860 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.120 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.120 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.120 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.120 [2024-11-04 11:40:18.628764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.120 [2024-11-04 11:40:18.628829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:53.120 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.120 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.120 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.120 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.120 [2024-11-04 11:40:18.640811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.379 [2024-11-04 11:40:18.642774] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.379 [2024-11-04 11:40:18.642814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.379 "name": "Existed_Raid", 00:07:53.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.379 "strip_size_kb": 64, 00:07:53.379 "state": "configuring", 00:07:53.379 "raid_level": "raid0", 00:07:53.379 "superblock": false, 00:07:53.379 "num_base_bdevs": 2, 00:07:53.379 "num_base_bdevs_discovered": 1, 00:07:53.379 "num_base_bdevs_operational": 2, 00:07:53.379 "base_bdevs_list": [ 00:07:53.379 { 00:07:53.379 "name": "BaseBdev1", 00:07:53.379 "uuid": "bdbdfaca-615f-4660-a0bb-069c7ad633f5", 00:07:53.379 "is_configured": true, 00:07:53.379 "data_offset": 0, 00:07:53.379 "data_size": 65536 00:07:53.379 }, 00:07:53.379 { 00:07:53.379 "name": "BaseBdev2", 00:07:53.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.379 "is_configured": false, 00:07:53.379 "data_offset": 0, 00:07:53.379 "data_size": 0 00:07:53.379 } 00:07:53.379 ] 00:07:53.379 }' 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.379 11:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.638 [2024-11-04 11:40:19.138663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.638 [2024-11-04 11:40:19.138722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:53.638 [2024-11-04 11:40:19.138732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:53.638 [2024-11-04 11:40:19.139023] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:53.638 [2024-11-04 11:40:19.139220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:53.638 [2024-11-04 11:40:19.139244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:53.638 [2024-11-04 11:40:19.139557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.638 BaseBdev2 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:53.638 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.638 11:40:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.897 [ 00:07:53.897 { 00:07:53.897 "name": "BaseBdev2", 00:07:53.897 "aliases": [ 00:07:53.897 "e91c4e6f-2926-43cb-aea1-cae1c7df1338" 00:07:53.897 ], 00:07:53.897 "product_name": "Malloc disk", 00:07:53.897 "block_size": 512, 00:07:53.897 "num_blocks": 65536, 00:07:53.897 "uuid": "e91c4e6f-2926-43cb-aea1-cae1c7df1338", 00:07:53.897 "assigned_rate_limits": { 00:07:53.897 "rw_ios_per_sec": 0, 00:07:53.897 "rw_mbytes_per_sec": 0, 00:07:53.897 "r_mbytes_per_sec": 0, 00:07:53.897 "w_mbytes_per_sec": 0 00:07:53.897 }, 00:07:53.897 "claimed": true, 00:07:53.897 "claim_type": "exclusive_write", 00:07:53.897 "zoned": false, 00:07:53.898 "supported_io_types": { 00:07:53.898 "read": true, 00:07:53.898 "write": true, 00:07:53.898 "unmap": true, 00:07:53.898 "flush": true, 00:07:53.898 "reset": true, 00:07:53.898 "nvme_admin": false, 00:07:53.898 "nvme_io": false, 00:07:53.898 "nvme_io_md": false, 00:07:53.898 "write_zeroes": true, 00:07:53.898 "zcopy": true, 00:07:53.898 "get_zone_info": false, 00:07:53.898 "zone_management": false, 00:07:53.898 "zone_append": false, 00:07:53.898 "compare": false, 00:07:53.898 "compare_and_write": false, 00:07:53.898 "abort": true, 00:07:53.898 "seek_hole": false, 00:07:53.898 "seek_data": false, 00:07:53.898 "copy": true, 00:07:53.898 "nvme_iov_md": false 00:07:53.898 }, 00:07:53.898 "memory_domains": [ 00:07:53.898 { 00:07:53.898 "dma_device_id": "system", 00:07:53.898 "dma_device_type": 1 00:07:53.898 }, 00:07:53.898 { 00:07:53.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.898 "dma_device_type": 2 00:07:53.898 } 00:07:53.898 ], 00:07:53.898 "driver_specific": {} 00:07:53.898 } 00:07:53.898 ] 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:53.898 11:40:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:53.898 "name": "Existed_Raid", 00:07:53.898 "uuid": "753f3ff1-6bd8-4c38-bba8-b4bef943325b", 00:07:53.898 "strip_size_kb": 64, 00:07:53.898 "state": "online", 00:07:53.898 "raid_level": "raid0", 00:07:53.898 "superblock": false, 00:07:53.898 "num_base_bdevs": 2, 00:07:53.898 "num_base_bdevs_discovered": 2, 00:07:53.898 "num_base_bdevs_operational": 2, 00:07:53.898 "base_bdevs_list": [ 00:07:53.898 { 00:07:53.898 "name": "BaseBdev1", 00:07:53.898 "uuid": "bdbdfaca-615f-4660-a0bb-069c7ad633f5", 00:07:53.898 "is_configured": true, 00:07:53.898 "data_offset": 0, 00:07:53.898 "data_size": 65536 00:07:53.898 }, 00:07:53.898 { 00:07:53.898 "name": "BaseBdev2", 00:07:53.898 "uuid": "e91c4e6f-2926-43cb-aea1-cae1c7df1338", 00:07:53.898 "is_configured": true, 00:07:53.898 "data_offset": 0, 00:07:53.898 "data_size": 65536 00:07:53.898 } 00:07:53.898 ] 00:07:53.898 }' 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.898 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.156 [2024-11-04 11:40:19.598228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.156 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.156 "name": "Existed_Raid", 00:07:54.156 "aliases": [ 00:07:54.156 "753f3ff1-6bd8-4c38-bba8-b4bef943325b" 00:07:54.156 ], 00:07:54.156 "product_name": "Raid Volume", 00:07:54.156 "block_size": 512, 00:07:54.156 "num_blocks": 131072, 00:07:54.156 "uuid": "753f3ff1-6bd8-4c38-bba8-b4bef943325b", 00:07:54.156 "assigned_rate_limits": { 00:07:54.156 "rw_ios_per_sec": 0, 00:07:54.156 "rw_mbytes_per_sec": 0, 00:07:54.156 "r_mbytes_per_sec": 0, 00:07:54.156 "w_mbytes_per_sec": 0 00:07:54.156 }, 00:07:54.156 "claimed": false, 00:07:54.156 "zoned": false, 00:07:54.156 "supported_io_types": { 00:07:54.156 "read": true, 00:07:54.156 "write": true, 00:07:54.156 "unmap": true, 00:07:54.156 "flush": true, 00:07:54.157 "reset": true, 00:07:54.157 "nvme_admin": false, 00:07:54.157 "nvme_io": false, 00:07:54.157 "nvme_io_md": false, 00:07:54.157 "write_zeroes": true, 00:07:54.157 "zcopy": false, 00:07:54.157 "get_zone_info": false, 00:07:54.157 "zone_management": false, 00:07:54.157 "zone_append": false, 00:07:54.157 "compare": false, 00:07:54.157 "compare_and_write": false, 00:07:54.157 "abort": false, 00:07:54.157 "seek_hole": false, 00:07:54.157 "seek_data": false, 00:07:54.157 "copy": false, 00:07:54.157 "nvme_iov_md": false 00:07:54.157 }, 00:07:54.157 "memory_domains": [ 00:07:54.157 { 00:07:54.157 "dma_device_id": "system", 00:07:54.157 "dma_device_type": 1 00:07:54.157 }, 00:07:54.157 { 00:07:54.157 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:54.157 "dma_device_type": 2 00:07:54.157 }, 00:07:54.157 { 00:07:54.157 "dma_device_id": "system", 00:07:54.157 "dma_device_type": 1 00:07:54.157 }, 00:07:54.157 { 00:07:54.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.157 "dma_device_type": 2 00:07:54.157 } 00:07:54.157 ], 00:07:54.157 "driver_specific": { 00:07:54.157 "raid": { 00:07:54.157 "uuid": "753f3ff1-6bd8-4c38-bba8-b4bef943325b", 00:07:54.157 "strip_size_kb": 64, 00:07:54.157 "state": "online", 00:07:54.157 "raid_level": "raid0", 00:07:54.157 "superblock": false, 00:07:54.157 "num_base_bdevs": 2, 00:07:54.157 "num_base_bdevs_discovered": 2, 00:07:54.157 "num_base_bdevs_operational": 2, 00:07:54.157 "base_bdevs_list": [ 00:07:54.157 { 00:07:54.157 "name": "BaseBdev1", 00:07:54.157 "uuid": "bdbdfaca-615f-4660-a0bb-069c7ad633f5", 00:07:54.157 "is_configured": true, 00:07:54.157 "data_offset": 0, 00:07:54.157 "data_size": 65536 00:07:54.157 }, 00:07:54.157 { 00:07:54.157 "name": "BaseBdev2", 00:07:54.157 "uuid": "e91c4e6f-2926-43cb-aea1-cae1c7df1338", 00:07:54.157 "is_configured": true, 00:07:54.157 "data_offset": 0, 00:07:54.157 "data_size": 65536 00:07:54.157 } 00:07:54.157 ] 00:07:54.157 } 00:07:54.157 } 00:07:54.157 }' 00:07:54.157 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:54.416 BaseBdev2' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:54.416 [2024-11-04 11:40:19.817641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:54.416 [2024-11-04 11:40:19.817679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.416 [2024-11-04 11:40:19.817736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.416 11:40:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.416 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.675 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.675 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.675 "name": "Existed_Raid", 00:07:54.675 "uuid": "753f3ff1-6bd8-4c38-bba8-b4bef943325b", 00:07:54.675 "strip_size_kb": 64, 00:07:54.675 "state": "offline", 00:07:54.675 "raid_level": "raid0", 00:07:54.675 "superblock": false, 00:07:54.675 "num_base_bdevs": 2, 00:07:54.675 "num_base_bdevs_discovered": 1, 00:07:54.675 "num_base_bdevs_operational": 1, 00:07:54.675 "base_bdevs_list": [ 00:07:54.675 { 00:07:54.675 "name": null, 00:07:54.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.675 "is_configured": false, 00:07:54.675 "data_offset": 0, 00:07:54.675 "data_size": 65536 00:07:54.675 }, 00:07:54.675 { 00:07:54.675 "name": "BaseBdev2", 00:07:54.675 "uuid": "e91c4e6f-2926-43cb-aea1-cae1c7df1338", 00:07:54.675 "is_configured": true, 00:07:54.675 "data_offset": 0, 00:07:54.675 "data_size": 65536 00:07:54.675 } 00:07:54.675 ] 00:07:54.675 }' 00:07:54.675 11:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.675 11:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.934 11:40:20 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:54.934 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.934 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.934 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.934 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.934 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.934 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.193 [2024-11-04 11:40:20.465755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:55.193 [2024-11-04 11:40:20.465816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.193 11:40:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60905 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60905 ']' 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60905 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60905 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:55.193 killing process with pid 60905 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60905' 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60905 00:07:55.193 [2024-11-04 11:40:20.655455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:55.193 11:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60905 00:07:55.193 [2024-11-04 11:40:20.673761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:56.570 00:07:56.570 real 0m5.125s 00:07:56.570 user 0m7.451s 00:07:56.570 sys 0m0.785s 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.570 ************************************ 00:07:56.570 END TEST raid_state_function_test 00:07:56.570 ************************************ 00:07:56.570 11:40:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:56.570 11:40:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:56.570 11:40:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.570 11:40:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.570 ************************************ 00:07:56.570 START TEST raid_state_function_test_sb 00:07:56.570 ************************************ 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61158 00:07:56.570 Process raid pid: 61158 00:07:56.570 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61158' 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61158 00:07:56.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61158 ']' 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:56.571 11:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.571 [2024-11-04 11:40:21.998205] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:07:56.571 [2024-11-04 11:40:21.998360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.883 [2024-11-04 11:40:22.179066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.883 [2024-11-04 11:40:22.294908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.157 [2024-11-04 11:40:22.508831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.157 [2024-11-04 11:40:22.508893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.415 [2024-11-04 11:40:22.875496] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.415 [2024-11-04 11:40:22.875549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.415 [2024-11-04 11:40:22.875560] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.415 [2024-11-04 11:40:22.875569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.415 
11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.415 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.416 "name": "Existed_Raid", 00:07:57.416 "uuid": "f180cfc8-440b-4f6c-bc4c-638ca01b1677", 00:07:57.416 "strip_size_kb": 
64, 00:07:57.416 "state": "configuring", 00:07:57.416 "raid_level": "raid0", 00:07:57.416 "superblock": true, 00:07:57.416 "num_base_bdevs": 2, 00:07:57.416 "num_base_bdevs_discovered": 0, 00:07:57.416 "num_base_bdevs_operational": 2, 00:07:57.416 "base_bdevs_list": [ 00:07:57.416 { 00:07:57.416 "name": "BaseBdev1", 00:07:57.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.416 "is_configured": false, 00:07:57.416 "data_offset": 0, 00:07:57.416 "data_size": 0 00:07:57.416 }, 00:07:57.416 { 00:07:57.416 "name": "BaseBdev2", 00:07:57.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.416 "is_configured": false, 00:07:57.416 "data_offset": 0, 00:07:57.416 "data_size": 0 00:07:57.416 } 00:07:57.416 ] 00:07:57.416 }' 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.416 11:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.983 [2024-11-04 11:40:23.270765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.983 [2024-11-04 11:40:23.270872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.983 11:40:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.983 [2024-11-04 11:40:23.282727] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.983 [2024-11-04 11:40:23.282810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.983 [2024-11-04 11:40:23.282839] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.983 [2024-11-04 11:40:23.282864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.983 [2024-11-04 11:40:23.329915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.983 BaseBdev1 00:07:57.983 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.984 [ 00:07:57.984 { 00:07:57.984 "name": "BaseBdev1", 00:07:57.984 "aliases": [ 00:07:57.984 "348d00f9-ffa5-4506-8851-694725216d9d" 00:07:57.984 ], 00:07:57.984 "product_name": "Malloc disk", 00:07:57.984 "block_size": 512, 00:07:57.984 "num_blocks": 65536, 00:07:57.984 "uuid": "348d00f9-ffa5-4506-8851-694725216d9d", 00:07:57.984 "assigned_rate_limits": { 00:07:57.984 "rw_ios_per_sec": 0, 00:07:57.984 "rw_mbytes_per_sec": 0, 00:07:57.984 "r_mbytes_per_sec": 0, 00:07:57.984 "w_mbytes_per_sec": 0 00:07:57.984 }, 00:07:57.984 "claimed": true, 00:07:57.984 "claim_type": "exclusive_write", 00:07:57.984 "zoned": false, 00:07:57.984 "supported_io_types": { 00:07:57.984 "read": true, 00:07:57.984 "write": true, 00:07:57.984 "unmap": true, 00:07:57.984 "flush": true, 00:07:57.984 "reset": true, 00:07:57.984 "nvme_admin": false, 00:07:57.984 "nvme_io": false, 00:07:57.984 "nvme_io_md": false, 00:07:57.984 "write_zeroes": true, 00:07:57.984 "zcopy": true, 00:07:57.984 "get_zone_info": false, 00:07:57.984 "zone_management": false, 00:07:57.984 "zone_append": false, 00:07:57.984 "compare": false, 00:07:57.984 "compare_and_write": false, 00:07:57.984 
"abort": true, 00:07:57.984 "seek_hole": false, 00:07:57.984 "seek_data": false, 00:07:57.984 "copy": true, 00:07:57.984 "nvme_iov_md": false 00:07:57.984 }, 00:07:57.984 "memory_domains": [ 00:07:57.984 { 00:07:57.984 "dma_device_id": "system", 00:07:57.984 "dma_device_type": 1 00:07:57.984 }, 00:07:57.984 { 00:07:57.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.984 "dma_device_type": 2 00:07:57.984 } 00:07:57.984 ], 00:07:57.984 "driver_specific": {} 00:07:57.984 } 00:07:57.984 ] 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.984 "name": "Existed_Raid", 00:07:57.984 "uuid": "48784046-9198-4fc0-85a3-aacb66dc114d", 00:07:57.984 "strip_size_kb": 64, 00:07:57.984 "state": "configuring", 00:07:57.984 "raid_level": "raid0", 00:07:57.984 "superblock": true, 00:07:57.984 "num_base_bdevs": 2, 00:07:57.984 "num_base_bdevs_discovered": 1, 00:07:57.984 "num_base_bdevs_operational": 2, 00:07:57.984 "base_bdevs_list": [ 00:07:57.984 { 00:07:57.984 "name": "BaseBdev1", 00:07:57.984 "uuid": "348d00f9-ffa5-4506-8851-694725216d9d", 00:07:57.984 "is_configured": true, 00:07:57.984 "data_offset": 2048, 00:07:57.984 "data_size": 63488 00:07:57.984 }, 00:07:57.984 { 00:07:57.984 "name": "BaseBdev2", 00:07:57.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.984 "is_configured": false, 00:07:57.984 "data_offset": 0, 00:07:57.984 "data_size": 0 00:07:57.984 } 00:07:57.984 ] 00:07:57.984 }' 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.984 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.552 [2024-11-04 11:40:23.821210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.552 [2024-11-04 11:40:23.821350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.552 [2024-11-04 11:40:23.829280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.552 [2024-11-04 11:40:23.831397] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.552 [2024-11-04 11:40:23.831460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.552 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.553 "name": "Existed_Raid", 00:07:58.553 "uuid": "2b7a7412-e735-45d5-a103-ba8c4265774e", 00:07:58.553 "strip_size_kb": 64, 00:07:58.553 "state": "configuring", 00:07:58.553 "raid_level": "raid0", 00:07:58.553 "superblock": true, 00:07:58.553 "num_base_bdevs": 2, 00:07:58.553 "num_base_bdevs_discovered": 1, 00:07:58.553 "num_base_bdevs_operational": 2, 00:07:58.553 "base_bdevs_list": [ 00:07:58.553 { 00:07:58.553 "name": "BaseBdev1", 00:07:58.553 "uuid": "348d00f9-ffa5-4506-8851-694725216d9d", 00:07:58.553 "is_configured": true, 00:07:58.553 "data_offset": 2048, 
00:07:58.553 "data_size": 63488 00:07:58.553 }, 00:07:58.553 { 00:07:58.553 "name": "BaseBdev2", 00:07:58.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.553 "is_configured": false, 00:07:58.553 "data_offset": 0, 00:07:58.553 "data_size": 0 00:07:58.553 } 00:07:58.553 ] 00:07:58.553 }' 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.553 11:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.813 [2024-11-04 11:40:24.266847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.813 [2024-11-04 11:40:24.267275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.813 [2024-11-04 11:40:24.267332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.813 [2024-11-04 11:40:24.267695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.813 BaseBdev2 00:07:58.813 [2024-11-04 11:40:24.267938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.813 [2024-11-04 11:40:24.267957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:58.813 [2024-11-04 11:40:24.268161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.813 [ 00:07:58.813 { 00:07:58.813 "name": "BaseBdev2", 00:07:58.813 "aliases": [ 00:07:58.813 "c2eb6586-14d5-4b4b-bc57-f65855f284aa" 00:07:58.813 ], 00:07:58.813 "product_name": "Malloc disk", 00:07:58.813 "block_size": 512, 00:07:58.813 "num_blocks": 65536, 00:07:58.813 "uuid": "c2eb6586-14d5-4b4b-bc57-f65855f284aa", 00:07:58.813 "assigned_rate_limits": { 00:07:58.813 "rw_ios_per_sec": 0, 00:07:58.813 "rw_mbytes_per_sec": 0, 00:07:58.813 "r_mbytes_per_sec": 0, 00:07:58.813 "w_mbytes_per_sec": 0 00:07:58.813 }, 00:07:58.813 "claimed": true, 00:07:58.813 "claim_type": 
"exclusive_write", 00:07:58.813 "zoned": false, 00:07:58.813 "supported_io_types": { 00:07:58.813 "read": true, 00:07:58.813 "write": true, 00:07:58.813 "unmap": true, 00:07:58.813 "flush": true, 00:07:58.813 "reset": true, 00:07:58.813 "nvme_admin": false, 00:07:58.813 "nvme_io": false, 00:07:58.813 "nvme_io_md": false, 00:07:58.813 "write_zeroes": true, 00:07:58.813 "zcopy": true, 00:07:58.813 "get_zone_info": false, 00:07:58.813 "zone_management": false, 00:07:58.813 "zone_append": false, 00:07:58.813 "compare": false, 00:07:58.813 "compare_and_write": false, 00:07:58.813 "abort": true, 00:07:58.813 "seek_hole": false, 00:07:58.813 "seek_data": false, 00:07:58.813 "copy": true, 00:07:58.813 "nvme_iov_md": false 00:07:58.813 }, 00:07:58.813 "memory_domains": [ 00:07:58.813 { 00:07:58.813 "dma_device_id": "system", 00:07:58.813 "dma_device_type": 1 00:07:58.813 }, 00:07:58.813 { 00:07:58.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.813 "dma_device_type": 2 00:07:58.813 } 00:07:58.813 ], 00:07:58.813 "driver_specific": {} 00:07:58.813 } 00:07:58.813 ] 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.813 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.072 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.072 "name": "Existed_Raid", 00:07:59.072 "uuid": "2b7a7412-e735-45d5-a103-ba8c4265774e", 00:07:59.072 "strip_size_kb": 64, 00:07:59.072 "state": "online", 00:07:59.072 "raid_level": "raid0", 00:07:59.072 "superblock": true, 00:07:59.072 "num_base_bdevs": 2, 00:07:59.072 "num_base_bdevs_discovered": 2, 00:07:59.072 "num_base_bdevs_operational": 2, 00:07:59.072 "base_bdevs_list": [ 00:07:59.072 { 00:07:59.072 "name": "BaseBdev1", 00:07:59.072 "uuid": "348d00f9-ffa5-4506-8851-694725216d9d", 00:07:59.072 "is_configured": true, 00:07:59.072 "data_offset": 2048, 00:07:59.072 "data_size": 63488 
00:07:59.072 }, 00:07:59.072 { 00:07:59.072 "name": "BaseBdev2", 00:07:59.072 "uuid": "c2eb6586-14d5-4b4b-bc57-f65855f284aa", 00:07:59.072 "is_configured": true, 00:07:59.072 "data_offset": 2048, 00:07:59.072 "data_size": 63488 00:07:59.072 } 00:07:59.072 ] 00:07:59.072 }' 00:07:59.072 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.072 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.332 [2024-11-04 11:40:24.706442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.332 "name": 
"Existed_Raid", 00:07:59.332 "aliases": [ 00:07:59.332 "2b7a7412-e735-45d5-a103-ba8c4265774e" 00:07:59.332 ], 00:07:59.332 "product_name": "Raid Volume", 00:07:59.332 "block_size": 512, 00:07:59.332 "num_blocks": 126976, 00:07:59.332 "uuid": "2b7a7412-e735-45d5-a103-ba8c4265774e", 00:07:59.332 "assigned_rate_limits": { 00:07:59.332 "rw_ios_per_sec": 0, 00:07:59.332 "rw_mbytes_per_sec": 0, 00:07:59.332 "r_mbytes_per_sec": 0, 00:07:59.332 "w_mbytes_per_sec": 0 00:07:59.332 }, 00:07:59.332 "claimed": false, 00:07:59.332 "zoned": false, 00:07:59.332 "supported_io_types": { 00:07:59.332 "read": true, 00:07:59.332 "write": true, 00:07:59.332 "unmap": true, 00:07:59.332 "flush": true, 00:07:59.332 "reset": true, 00:07:59.332 "nvme_admin": false, 00:07:59.332 "nvme_io": false, 00:07:59.332 "nvme_io_md": false, 00:07:59.332 "write_zeroes": true, 00:07:59.332 "zcopy": false, 00:07:59.332 "get_zone_info": false, 00:07:59.332 "zone_management": false, 00:07:59.332 "zone_append": false, 00:07:59.332 "compare": false, 00:07:59.332 "compare_and_write": false, 00:07:59.332 "abort": false, 00:07:59.332 "seek_hole": false, 00:07:59.332 "seek_data": false, 00:07:59.332 "copy": false, 00:07:59.332 "nvme_iov_md": false 00:07:59.332 }, 00:07:59.332 "memory_domains": [ 00:07:59.332 { 00:07:59.332 "dma_device_id": "system", 00:07:59.332 "dma_device_type": 1 00:07:59.332 }, 00:07:59.332 { 00:07:59.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.332 "dma_device_type": 2 00:07:59.332 }, 00:07:59.332 { 00:07:59.332 "dma_device_id": "system", 00:07:59.332 "dma_device_type": 1 00:07:59.332 }, 00:07:59.332 { 00:07:59.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.332 "dma_device_type": 2 00:07:59.332 } 00:07:59.332 ], 00:07:59.332 "driver_specific": { 00:07:59.332 "raid": { 00:07:59.332 "uuid": "2b7a7412-e735-45d5-a103-ba8c4265774e", 00:07:59.332 "strip_size_kb": 64, 00:07:59.332 "state": "online", 00:07:59.332 "raid_level": "raid0", 00:07:59.332 "superblock": true, 00:07:59.332 
"num_base_bdevs": 2, 00:07:59.332 "num_base_bdevs_discovered": 2, 00:07:59.332 "num_base_bdevs_operational": 2, 00:07:59.332 "base_bdevs_list": [ 00:07:59.332 { 00:07:59.332 "name": "BaseBdev1", 00:07:59.332 "uuid": "348d00f9-ffa5-4506-8851-694725216d9d", 00:07:59.332 "is_configured": true, 00:07:59.332 "data_offset": 2048, 00:07:59.332 "data_size": 63488 00:07:59.332 }, 00:07:59.332 { 00:07:59.332 "name": "BaseBdev2", 00:07:59.332 "uuid": "c2eb6586-14d5-4b4b-bc57-f65855f284aa", 00:07:59.332 "is_configured": true, 00:07:59.332 "data_offset": 2048, 00:07:59.332 "data_size": 63488 00:07:59.332 } 00:07:59.332 ] 00:07:59.332 } 00:07:59.332 } 00:07:59.332 }' 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.332 BaseBdev2' 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.332 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.591 11:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 [2024-11-04 11:40:24.953768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.591 [2024-11-04 11:40:24.953847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.591 [2024-11-04 11:40:24.953907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.591 11:40:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.591 "name": "Existed_Raid", 00:07:59.591 "uuid": "2b7a7412-e735-45d5-a103-ba8c4265774e", 00:07:59.591 "strip_size_kb": 64, 00:07:59.591 "state": "offline", 00:07:59.591 "raid_level": "raid0", 00:07:59.591 "superblock": true, 00:07:59.591 "num_base_bdevs": 2, 00:07:59.591 "num_base_bdevs_discovered": 1, 00:07:59.591 "num_base_bdevs_operational": 1, 00:07:59.591 "base_bdevs_list": [ 00:07:59.591 { 00:07:59.591 "name": null, 00:07:59.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.591 "is_configured": false, 00:07:59.591 "data_offset": 0, 00:07:59.591 "data_size": 63488 00:07:59.591 }, 00:07:59.591 { 00:07:59.591 "name": "BaseBdev2", 00:07:59.591 "uuid": "c2eb6586-14d5-4b4b-bc57-f65855f284aa", 00:07:59.591 "is_configured": true, 00:07:59.591 "data_offset": 2048, 00:07:59.591 "data_size": 63488 00:07:59.591 } 00:07:59.591 ] 00:07:59.591 }' 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.591 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.171 11:40:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.171 [2024-11-04 11:40:25.574653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.171 [2024-11-04 11:40:25.574710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.171 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.171 11:40:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61158 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61158 ']' 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61158 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61158 00:08:00.431 killing process with pid 61158 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61158' 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61158 00:08:00.431 [2024-11-04 11:40:25.750624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.431 11:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61158 00:08:00.431 [2024-11-04 11:40:25.768304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.809 11:40:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:08:01.809 00:08:01.809 real 0m5.076s 00:08:01.809 user 0m7.279s 00:08:01.809 sys 0m0.791s 00:08:01.809 11:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.809 11:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.809 ************************************ 00:08:01.809 END TEST raid_state_function_test_sb 00:08:01.809 ************************************ 00:08:01.809 11:40:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:01.809 11:40:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:01.809 11:40:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.809 11:40:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.809 ************************************ 00:08:01.809 START TEST raid_superblock_test 00:08:01.809 ************************************ 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:01.809 11:40:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61410 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61410 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61410 ']' 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:01.809 11:40:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.809 [2024-11-04 11:40:27.128127] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:01.809 [2024-11-04 11:40:27.128344] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61410 ] 00:08:01.809 [2024-11-04 11:40:27.305026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.069 [2024-11-04 11:40:27.422138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.328 [2024-11-04 11:40:27.635316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.328 [2024-11-04 11:40:27.635489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:02.586 11:40:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.586 malloc1 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.586 [2024-11-04 11:40:28.068329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.586 [2024-11-04 11:40:28.068527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.586 [2024-11-04 11:40:28.068583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:02.586 [2024-11-04 11:40:28.068644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.586 [2024-11-04 11:40:28.071190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.586 [2024-11-04 11:40:28.071291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.586 pt1 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:02.586 11:40:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.586 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.846 malloc2 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.846 [2024-11-04 11:40:28.128471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:02.846 [2024-11-04 11:40:28.128536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.846 [2024-11-04 11:40:28.128562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:02.846 
[2024-11-04 11:40:28.128572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.846 [2024-11-04 11:40:28.130843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.846 [2024-11-04 11:40:28.130881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:02.846 pt2 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.846 [2024-11-04 11:40:28.140522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.846 [2024-11-04 11:40:28.142513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:02.846 [2024-11-04 11:40:28.142692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:02.846 [2024-11-04 11:40:28.142707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.846 [2024-11-04 11:40:28.142991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.846 [2024-11-04 11:40:28.143155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:02.846 [2024-11-04 11:40:28.143167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:02.846 [2024-11-04 11:40:28.143345] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.846 "name": "raid_bdev1", 00:08:02.846 "uuid": 
"e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719", 00:08:02.846 "strip_size_kb": 64, 00:08:02.846 "state": "online", 00:08:02.846 "raid_level": "raid0", 00:08:02.846 "superblock": true, 00:08:02.846 "num_base_bdevs": 2, 00:08:02.846 "num_base_bdevs_discovered": 2, 00:08:02.846 "num_base_bdevs_operational": 2, 00:08:02.846 "base_bdevs_list": [ 00:08:02.846 { 00:08:02.846 "name": "pt1", 00:08:02.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.846 "is_configured": true, 00:08:02.846 "data_offset": 2048, 00:08:02.846 "data_size": 63488 00:08:02.846 }, 00:08:02.846 { 00:08:02.846 "name": "pt2", 00:08:02.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.846 "is_configured": true, 00:08:02.846 "data_offset": 2048, 00:08:02.846 "data_size": 63488 00:08:02.846 } 00:08:02.846 ] 00:08:02.846 }' 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.846 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.105 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.105 11:40:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.105 [2024-11-04 11:40:28.608011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.364 "name": "raid_bdev1", 00:08:03.364 "aliases": [ 00:08:03.364 "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719" 00:08:03.364 ], 00:08:03.364 "product_name": "Raid Volume", 00:08:03.364 "block_size": 512, 00:08:03.364 "num_blocks": 126976, 00:08:03.364 "uuid": "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719", 00:08:03.364 "assigned_rate_limits": { 00:08:03.364 "rw_ios_per_sec": 0, 00:08:03.364 "rw_mbytes_per_sec": 0, 00:08:03.364 "r_mbytes_per_sec": 0, 00:08:03.364 "w_mbytes_per_sec": 0 00:08:03.364 }, 00:08:03.364 "claimed": false, 00:08:03.364 "zoned": false, 00:08:03.364 "supported_io_types": { 00:08:03.364 "read": true, 00:08:03.364 "write": true, 00:08:03.364 "unmap": true, 00:08:03.364 "flush": true, 00:08:03.364 "reset": true, 00:08:03.364 "nvme_admin": false, 00:08:03.364 "nvme_io": false, 00:08:03.364 "nvme_io_md": false, 00:08:03.364 "write_zeroes": true, 00:08:03.364 "zcopy": false, 00:08:03.364 "get_zone_info": false, 00:08:03.364 "zone_management": false, 00:08:03.364 "zone_append": false, 00:08:03.364 "compare": false, 00:08:03.364 "compare_and_write": false, 00:08:03.364 "abort": false, 00:08:03.364 "seek_hole": false, 00:08:03.364 "seek_data": false, 00:08:03.364 "copy": false, 00:08:03.364 "nvme_iov_md": false 00:08:03.364 }, 00:08:03.364 "memory_domains": [ 00:08:03.364 { 00:08:03.364 "dma_device_id": "system", 00:08:03.364 "dma_device_type": 1 00:08:03.364 }, 00:08:03.364 { 00:08:03.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.364 "dma_device_type": 2 00:08:03.364 }, 00:08:03.364 { 00:08:03.364 "dma_device_id": "system", 00:08:03.364 "dma_device_type": 
1 00:08:03.364 }, 00:08:03.364 { 00:08:03.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.364 "dma_device_type": 2 00:08:03.364 } 00:08:03.364 ], 00:08:03.364 "driver_specific": { 00:08:03.364 "raid": { 00:08:03.364 "uuid": "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719", 00:08:03.364 "strip_size_kb": 64, 00:08:03.364 "state": "online", 00:08:03.364 "raid_level": "raid0", 00:08:03.364 "superblock": true, 00:08:03.364 "num_base_bdevs": 2, 00:08:03.364 "num_base_bdevs_discovered": 2, 00:08:03.364 "num_base_bdevs_operational": 2, 00:08:03.364 "base_bdevs_list": [ 00:08:03.364 { 00:08:03.364 "name": "pt1", 00:08:03.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.364 "is_configured": true, 00:08:03.364 "data_offset": 2048, 00:08:03.364 "data_size": 63488 00:08:03.364 }, 00:08:03.364 { 00:08:03.364 "name": "pt2", 00:08:03.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.364 "is_configured": true, 00:08:03.364 "data_offset": 2048, 00:08:03.364 "data_size": 63488 00:08:03.364 } 00:08:03.364 ] 00:08:03.364 } 00:08:03.364 } 00:08:03.364 }' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:03.364 pt2' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.364 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.365 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.365 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:03.365 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.365 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.365 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.365 [2024-11-04 11:40:28.851645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.365 11:40:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719 ']' 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 [2024-11-04 11:40:28.895192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.629 [2024-11-04 11:40:28.895267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.629 [2024-11-04 11:40:28.895377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.629 [2024-11-04 11:40:28.895455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.629 [2024-11-04 11:40:28.895472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 11:40:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:03.629 11:40:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.629 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 [2024-11-04 11:40:29.023014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:03.629 [2024-11-04 11:40:29.025097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:03.629 [2024-11-04 11:40:29.025228] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:03.629 [2024-11-04 11:40:29.025347] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:03.629 [2024-11-04 11:40:29.025421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.629 [2024-11-04 11:40:29.025473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:03.629 request: 00:08:03.629 { 00:08:03.629 "name": "raid_bdev1", 00:08:03.629 "raid_level": "raid0", 00:08:03.629 "base_bdevs": [ 00:08:03.629 "malloc1", 00:08:03.629 "malloc2" 00:08:03.629 ], 00:08:03.629 "strip_size_kb": 64, 00:08:03.630 "superblock": false, 00:08:03.630 "method": "bdev_raid_create", 00:08:03.630 "req_id": 1 00:08:03.630 } 00:08:03.630 Got JSON-RPC error response 00:08:03.630 response: 00:08:03.630 { 00:08:03.630 "code": -17, 00:08:03.630 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:03.630 } 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 [2024-11-04 11:40:29.082851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:03.630 [2024-11-04 11:40:29.082956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.630 [2024-11-04 11:40:29.082980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:03.630 [2024-11-04 11:40:29.082991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.630 [2024-11-04 11:40:29.085474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.630 [2024-11-04 11:40:29.085509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:03.630 [2024-11-04 11:40:29.085609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:03.630 [2024-11-04 11:40:29.085675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:03.630 pt1 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.630 "name": "raid_bdev1", 00:08:03.630 "uuid": "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719", 00:08:03.630 "strip_size_kb": 64, 00:08:03.630 "state": "configuring", 00:08:03.630 "raid_level": "raid0", 00:08:03.630 "superblock": true, 00:08:03.630 "num_base_bdevs": 2, 00:08:03.630 "num_base_bdevs_discovered": 1, 00:08:03.630 "num_base_bdevs_operational": 2, 00:08:03.630 "base_bdevs_list": [ 00:08:03.630 { 00:08:03.630 "name": "pt1", 00:08:03.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.630 "is_configured": true, 00:08:03.630 "data_offset": 2048, 00:08:03.630 "data_size": 63488 00:08:03.630 }, 00:08:03.630 { 00:08:03.630 "name": null, 00:08:03.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.630 "is_configured": false, 00:08:03.630 "data_offset": 2048, 00:08:03.630 "data_size": 63488 00:08:03.630 } 00:08:03.630 ] 00:08:03.630 }' 00:08:03.630 11:40:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.630 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.211 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:04.211 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:04.211 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.211 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.211 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.211 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.211 [2024-11-04 11:40:29.526148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.211 [2024-11-04 11:40:29.526296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.211 [2024-11-04 11:40:29.526341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:04.211 [2024-11-04 11:40:29.526378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.211 [2024-11-04 11:40:29.526968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.211 [2024-11-04 11:40:29.527038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.211 [2024-11-04 11:40:29.527172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.211 [2024-11-04 11:40:29.527234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.211 [2024-11-04 11:40:29.527427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.211 [2024-11-04 11:40:29.527475] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.212 [2024-11-04 11:40:29.527775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:04.212 [2024-11-04 11:40:29.528000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.212 [2024-11-04 11:40:29.528047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.212 [2024-11-04 11:40:29.528277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.212 pt2 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.212 "name": "raid_bdev1", 00:08:04.212 "uuid": "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719", 00:08:04.212 "strip_size_kb": 64, 00:08:04.212 "state": "online", 00:08:04.212 "raid_level": "raid0", 00:08:04.212 "superblock": true, 00:08:04.212 "num_base_bdevs": 2, 00:08:04.212 "num_base_bdevs_discovered": 2, 00:08:04.212 "num_base_bdevs_operational": 2, 00:08:04.212 "base_bdevs_list": [ 00:08:04.212 { 00:08:04.212 "name": "pt1", 00:08:04.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.212 "is_configured": true, 00:08:04.212 "data_offset": 2048, 00:08:04.212 "data_size": 63488 00:08:04.212 }, 00:08:04.212 { 00:08:04.212 "name": "pt2", 00:08:04.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.212 "is_configured": true, 00:08:04.212 "data_offset": 2048, 00:08:04.212 "data_size": 63488 00:08:04.212 } 00:08:04.212 ] 00:08:04.212 }' 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.212 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.471 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:04.471 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:04.472 
11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.472 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.472 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.472 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.472 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.472 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.472 11:40:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.472 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.472 [2024-11-04 11:40:29.977676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.731 11:40:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.731 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.731 "name": "raid_bdev1", 00:08:04.731 "aliases": [ 00:08:04.731 "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719" 00:08:04.731 ], 00:08:04.731 "product_name": "Raid Volume", 00:08:04.731 "block_size": 512, 00:08:04.731 "num_blocks": 126976, 00:08:04.731 "uuid": "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719", 00:08:04.731 "assigned_rate_limits": { 00:08:04.731 "rw_ios_per_sec": 0, 00:08:04.731 "rw_mbytes_per_sec": 0, 00:08:04.731 "r_mbytes_per_sec": 0, 00:08:04.731 "w_mbytes_per_sec": 0 00:08:04.731 }, 00:08:04.731 "claimed": false, 00:08:04.731 "zoned": false, 00:08:04.731 "supported_io_types": { 00:08:04.731 "read": true, 00:08:04.731 "write": true, 00:08:04.731 "unmap": true, 00:08:04.731 "flush": true, 00:08:04.731 "reset": true, 00:08:04.731 "nvme_admin": false, 00:08:04.731 "nvme_io": false, 00:08:04.731 "nvme_io_md": false, 00:08:04.731 
"write_zeroes": true, 00:08:04.731 "zcopy": false, 00:08:04.731 "get_zone_info": false, 00:08:04.731 "zone_management": false, 00:08:04.731 "zone_append": false, 00:08:04.731 "compare": false, 00:08:04.731 "compare_and_write": false, 00:08:04.731 "abort": false, 00:08:04.731 "seek_hole": false, 00:08:04.731 "seek_data": false, 00:08:04.731 "copy": false, 00:08:04.731 "nvme_iov_md": false 00:08:04.731 }, 00:08:04.731 "memory_domains": [ 00:08:04.731 { 00:08:04.731 "dma_device_id": "system", 00:08:04.731 "dma_device_type": 1 00:08:04.731 }, 00:08:04.731 { 00:08:04.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.731 "dma_device_type": 2 00:08:04.731 }, 00:08:04.731 { 00:08:04.731 "dma_device_id": "system", 00:08:04.731 "dma_device_type": 1 00:08:04.731 }, 00:08:04.731 { 00:08:04.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.731 "dma_device_type": 2 00:08:04.731 } 00:08:04.731 ], 00:08:04.731 "driver_specific": { 00:08:04.731 "raid": { 00:08:04.731 "uuid": "e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719", 00:08:04.731 "strip_size_kb": 64, 00:08:04.731 "state": "online", 00:08:04.731 "raid_level": "raid0", 00:08:04.731 "superblock": true, 00:08:04.731 "num_base_bdevs": 2, 00:08:04.731 "num_base_bdevs_discovered": 2, 00:08:04.731 "num_base_bdevs_operational": 2, 00:08:04.731 "base_bdevs_list": [ 00:08:04.731 { 00:08:04.731 "name": "pt1", 00:08:04.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.731 "is_configured": true, 00:08:04.731 "data_offset": 2048, 00:08:04.731 "data_size": 63488 00:08:04.731 }, 00:08:04.731 { 00:08:04.731 "name": "pt2", 00:08:04.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.731 "is_configured": true, 00:08:04.731 "data_offset": 2048, 00:08:04.732 "data_size": 63488 00:08:04.732 } 00:08:04.732 ] 00:08:04.732 } 00:08:04.732 } 00:08:04.732 }' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:04.732 pt2' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.732 11:40:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.732 [2024-11-04 11:40:30.209244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719 '!=' e98b3e82-1f72-4a2a-b6ca-0c0cb25a1719 ']' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61410 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61410 ']' 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61410 00:08:04.732 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:04.992 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.992 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61410 00:08:04.992 11:40:30 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.992 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.992 killing process with pid 61410 00:08:04.992 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61410' 00:08:04.992 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61410 00:08:04.992 [2024-11-04 11:40:30.287698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.992 [2024-11-04 11:40:30.287801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.992 [2024-11-04 11:40:30.287856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.992 [2024-11-04 11:40:30.287869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.992 11:40:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61410 00:08:05.251 [2024-11-04 11:40:30.518086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.191 11:40:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:06.191 00:08:06.191 real 0m4.651s 00:08:06.191 user 0m6.527s 00:08:06.191 sys 0m0.769s 00:08:06.191 ************************************ 00:08:06.191 END TEST raid_superblock_test 00:08:06.191 ************************************ 00:08:06.191 11:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.191 11:40:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.450 11:40:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:06.450 11:40:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:06.450 11:40:31 bdev_raid -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:08:06.450 11:40:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.450 ************************************ 00:08:06.450 START TEST raid_read_error_test 00:08:06.450 ************************************ 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QLVUfL02SR 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61616 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61616 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61616 ']' 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.450 11:40:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.450 [2024-11-04 11:40:31.851200] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:06.450 [2024-11-04 11:40:31.851409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61616 ] 00:08:06.710 [2024-11-04 11:40:32.028790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.710 [2024-11-04 11:40:32.150344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.968 [2024-11-04 11:40:32.358418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.968 [2024-11-04 11:40:32.358555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.301 BaseBdev1_malloc 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.301 true 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.301 [2024-11-04 11:40:32.771225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.301 [2024-11-04 11:40:32.771380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.301 [2024-11-04 11:40:32.771439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.301 [2024-11-04 11:40:32.771454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.301 [2024-11-04 11:40:32.773953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.301 [2024-11-04 11:40:32.773994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.301 BaseBdev1 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.301 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:07.573 BaseBdev2_malloc 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.573 true 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.573 [2024-11-04 11:40:32.827344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.573 [2024-11-04 11:40:32.827418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.573 [2024-11-04 11:40:32.827438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.573 [2024-11-04 11:40:32.827449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.573 [2024-11-04 11:40:32.829792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.573 [2024-11-04 11:40:32.829832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.573 BaseBdev2 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.573 11:40:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.573 [2024-11-04 11:40:32.835401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.573 [2024-11-04 11:40:32.837469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.573 [2024-11-04 11:40:32.837662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.573 [2024-11-04 11:40:32.837680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.573 [2024-11-04 11:40:32.837935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:07.573 [2024-11-04 11:40:32.838140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.573 [2024-11-04 11:40:32.838153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.573 [2024-11-04 11:40:32.838319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.573 "name": "raid_bdev1", 00:08:07.573 "uuid": "8bcf93c5-a3d0-4a2e-b8a6-cbf7524522b1", 00:08:07.573 "strip_size_kb": 64, 00:08:07.573 "state": "online", 00:08:07.573 "raid_level": "raid0", 00:08:07.573 "superblock": true, 00:08:07.573 "num_base_bdevs": 2, 00:08:07.573 "num_base_bdevs_discovered": 2, 00:08:07.573 "num_base_bdevs_operational": 2, 00:08:07.573 "base_bdevs_list": [ 00:08:07.573 { 00:08:07.573 "name": "BaseBdev1", 00:08:07.573 "uuid": "b1d87f37-db6c-56f4-96a0-2a493deddb6e", 00:08:07.573 "is_configured": true, 00:08:07.573 "data_offset": 2048, 00:08:07.573 "data_size": 63488 00:08:07.573 }, 00:08:07.573 { 00:08:07.573 "name": "BaseBdev2", 00:08:07.573 "uuid": "75e273fc-0b12-52bd-8599-c4f2deb87454", 00:08:07.573 "is_configured": true, 00:08:07.573 "data_offset": 2048, 00:08:07.573 "data_size": 63488 00:08:07.573 } 00:08:07.573 ] 00:08:07.573 }' 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.573 11:40:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.831 11:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.831 11:40:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:08.089 [2024-11-04 11:40:33.375902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.025 "name": "raid_bdev1", 00:08:09.025 "uuid": "8bcf93c5-a3d0-4a2e-b8a6-cbf7524522b1", 00:08:09.025 "strip_size_kb": 64, 00:08:09.025 "state": "online", 00:08:09.025 "raid_level": "raid0", 00:08:09.025 "superblock": true, 00:08:09.025 "num_base_bdevs": 2, 00:08:09.025 "num_base_bdevs_discovered": 2, 00:08:09.025 "num_base_bdevs_operational": 2, 00:08:09.025 "base_bdevs_list": [ 00:08:09.025 { 00:08:09.025 "name": "BaseBdev1", 00:08:09.025 "uuid": "b1d87f37-db6c-56f4-96a0-2a493deddb6e", 00:08:09.025 "is_configured": true, 00:08:09.025 "data_offset": 2048, 00:08:09.025 "data_size": 63488 00:08:09.025 }, 00:08:09.025 { 00:08:09.025 "name": "BaseBdev2", 00:08:09.025 "uuid": "75e273fc-0b12-52bd-8599-c4f2deb87454", 00:08:09.025 "is_configured": true, 00:08:09.025 "data_offset": 2048, 00:08:09.025 "data_size": 63488 00:08:09.025 } 00:08:09.025 ] 00:08:09.025 }' 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.025 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.284 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.284 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.284 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.284 [2024-11-04 11:40:34.739749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.284 [2024-11-04 11:40:34.739785] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.284 [2024-11-04 11:40:34.742996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.284 [2024-11-04 11:40:34.743044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.284 [2024-11-04 11:40:34.743082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.284 [2024-11-04 11:40:34.743095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.284 { 00:08:09.284 "results": [ 00:08:09.284 { 00:08:09.284 "job": "raid_bdev1", 00:08:09.284 "core_mask": "0x1", 00:08:09.284 "workload": "randrw", 00:08:09.284 "percentage": 50, 00:08:09.284 "status": "finished", 00:08:09.284 "queue_depth": 1, 00:08:09.284 "io_size": 131072, 00:08:09.284 "runtime": 1.364695, 00:08:09.284 "iops": 15104.473893434064, 00:08:09.285 "mibps": 1888.059236679258, 00:08:09.285 "io_failed": 1, 00:08:09.285 "io_timeout": 0, 00:08:09.285 "avg_latency_us": 91.9302391260783, 00:08:09.285 "min_latency_us": 26.717903930131005, 00:08:09.285 "max_latency_us": 1488.1537117903931 00:08:09.285 } 00:08:09.285 ], 00:08:09.285 "core_count": 1 00:08:09.285 } 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61616 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61616 ']' 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61616 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61616 00:08:09.285 killing process with pid 61616 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61616' 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61616 00:08:09.285 11:40:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61616 00:08:09.285 [2024-11-04 11:40:34.769777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.544 [2024-11-04 11:40:34.905237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QLVUfL02SR 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:10.924 ************************************ 00:08:10.924 END 
TEST raid_read_error_test 00:08:10.924 ************************************ 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:10.924 00:08:10.924 real 0m4.348s 00:08:10.924 user 0m5.270s 00:08:10.924 sys 0m0.482s 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.924 11:40:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.924 11:40:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:10.924 11:40:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:10.924 11:40:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.924 11:40:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.924 ************************************ 00:08:10.924 START TEST raid_write_error_test 00:08:10.924 ************************************ 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ErMkoU6i1U 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61762 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61762 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61762 ']' 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:10.924 11:40:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.924 [2024-11-04 11:40:36.276501] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:08:10.924 [2024-11-04 11:40:36.276713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61762 ] 00:08:11.183 [2024-11-04 11:40:36.463170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.183 [2024-11-04 11:40:36.581507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.442 [2024-11-04 11:40:36.790571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.442 [2024-11-04 11:40:36.790696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.701 BaseBdev1_malloc 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.701 true 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.701 [2024-11-04 11:40:37.195866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.701 [2024-11-04 11:40:37.195944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.701 [2024-11-04 11:40:37.195970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.701 [2024-11-04 11:40:37.195982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.701 [2024-11-04 11:40:37.198572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.701 [2024-11-04 11:40:37.198619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.701 BaseBdev1 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.701 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.961 BaseBdev2_malloc 00:08:11.961 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.962 11:40:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.962 true 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.962 [2024-11-04 11:40:37.265682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.962 [2024-11-04 11:40:37.265741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.962 [2024-11-04 11:40:37.265760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:11.962 [2024-11-04 11:40:37.265771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.962 [2024-11-04 11:40:37.267993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.962 [2024-11-04 11:40:37.268036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.962 BaseBdev2 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.962 [2024-11-04 11:40:37.277752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:11.962 [2024-11-04 11:40:37.279653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.962 [2024-11-04 11:40:37.279852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.962 [2024-11-04 11:40:37.279870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.962 [2024-11-04 11:40:37.280126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:11.962 [2024-11-04 11:40:37.280320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.962 [2024-11-04 11:40:37.280335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:11.962 [2024-11-04 11:40:37.280606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.962 "name": "raid_bdev1", 00:08:11.962 "uuid": "0b3dddf5-dd8f-4669-b953-82f36d8e3392", 00:08:11.962 "strip_size_kb": 64, 00:08:11.962 "state": "online", 00:08:11.962 "raid_level": "raid0", 00:08:11.962 "superblock": true, 00:08:11.962 "num_base_bdevs": 2, 00:08:11.962 "num_base_bdevs_discovered": 2, 00:08:11.962 "num_base_bdevs_operational": 2, 00:08:11.962 "base_bdevs_list": [ 00:08:11.962 { 00:08:11.962 "name": "BaseBdev1", 00:08:11.962 "uuid": "7cd6afcb-4b1f-55b2-a39d-112be9ab8d17", 00:08:11.962 "is_configured": true, 00:08:11.962 "data_offset": 2048, 00:08:11.962 "data_size": 63488 00:08:11.962 }, 00:08:11.962 { 00:08:11.962 "name": "BaseBdev2", 00:08:11.962 "uuid": "5fa19fdd-ba03-5a9c-9760-7956011d5dc6", 00:08:11.962 "is_configured": true, 00:08:11.962 "data_offset": 2048, 00:08:11.962 "data_size": 63488 00:08:11.962 } 00:08:11.962 ] 00:08:11.962 }' 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.962 11:40:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.222 11:40:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:12.222 11:40:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:12.481 [2024-11-04 11:40:37.830224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.417 11:40:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.417 "name": "raid_bdev1", 00:08:13.417 "uuid": "0b3dddf5-dd8f-4669-b953-82f36d8e3392", 00:08:13.417 "strip_size_kb": 64, 00:08:13.417 "state": "online", 00:08:13.417 "raid_level": "raid0", 00:08:13.417 "superblock": true, 00:08:13.417 "num_base_bdevs": 2, 00:08:13.417 "num_base_bdevs_discovered": 2, 00:08:13.417 "num_base_bdevs_operational": 2, 00:08:13.417 "base_bdevs_list": [ 00:08:13.417 { 00:08:13.417 "name": "BaseBdev1", 00:08:13.417 "uuid": "7cd6afcb-4b1f-55b2-a39d-112be9ab8d17", 00:08:13.417 "is_configured": true, 00:08:13.417 "data_offset": 2048, 00:08:13.417 "data_size": 63488 00:08:13.417 }, 00:08:13.417 { 00:08:13.417 "name": "BaseBdev2", 00:08:13.417 "uuid": "5fa19fdd-ba03-5a9c-9760-7956011d5dc6", 00:08:13.417 "is_configured": true, 00:08:13.417 "data_offset": 2048, 00:08:13.417 "data_size": 63488 00:08:13.417 } 00:08:13.417 ] 00:08:13.417 }' 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.417 11:40:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.985 [2024-11-04 11:40:39.218961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.985 [2024-11-04 11:40:39.219004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.985 [2024-11-04 11:40:39.222212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.985 [2024-11-04 11:40:39.222271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.985 [2024-11-04 11:40:39.222337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.985 [2024-11-04 11:40:39.222362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:13.985 { 00:08:13.985 "results": [ 00:08:13.985 { 00:08:13.985 "job": "raid_bdev1", 00:08:13.985 "core_mask": "0x1", 00:08:13.985 "workload": "randrw", 00:08:13.985 "percentage": 50, 00:08:13.985 "status": "finished", 00:08:13.985 "queue_depth": 1, 00:08:13.985 "io_size": 131072, 00:08:13.985 "runtime": 1.389628, 00:08:13.985 "iops": 15061.584827018454, 00:08:13.985 "mibps": 1882.6981033773068, 00:08:13.985 "io_failed": 1, 00:08:13.985 "io_timeout": 0, 00:08:13.985 "avg_latency_us": 92.20316569372562, 00:08:13.985 "min_latency_us": 26.1589519650655, 00:08:13.985 "max_latency_us": 1609.7816593886462 00:08:13.985 } 00:08:13.985 ], 00:08:13.985 "core_count": 1 00:08:13.985 } 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61762 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61762 ']' 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61762 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61762 00:08:13.985 killing process with pid 61762 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61762' 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61762 00:08:13.985 11:40:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61762 00:08:13.985 [2024-11-04 11:40:39.254247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.985 [2024-11-04 11:40:39.402551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.364 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ErMkoU6i1U 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:15.365 ************************************ 00:08:15.365 END TEST raid_write_error_test 00:08:15.365 ************************************ 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:15.365 00:08:15.365 real 0m4.414s 00:08:15.365 user 0m5.357s 00:08:15.365 sys 0m0.508s 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.365 11:40:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.365 11:40:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:15.365 11:40:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:15.365 11:40:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:15.365 11:40:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.365 11:40:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.365 ************************************ 00:08:15.365 START TEST raid_state_function_test 00:08:15.365 ************************************ 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-i 0 -L bdev_raid 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61905 00:08:15.365 Process raid pid: 61905 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61905' 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61905 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61905 ']' 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.365 11:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.365 [2024-11-04 11:40:40.716190] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:08:15.365 [2024-11-04 11:40:40.716333] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.365 [2024-11-04 11:40:40.870615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.624 [2024-11-04 11:40:40.990222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.883 [2024-11-04 11:40:41.201780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.883 [2024-11-04 11:40:41.201835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.142 [2024-11-04 11:40:41.577057] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.142 [2024-11-04 11:40:41.577111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.142 [2024-11-04 11:40:41.577122] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.142 [2024-11-04 11:40:41.577131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.142 11:40:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.142 "name": "Existed_Raid", 00:08:16.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.142 "strip_size_kb": 64, 00:08:16.142 "state": "configuring", 00:08:16.142 
"raid_level": "concat", 00:08:16.142 "superblock": false, 00:08:16.142 "num_base_bdevs": 2, 00:08:16.142 "num_base_bdevs_discovered": 0, 00:08:16.142 "num_base_bdevs_operational": 2, 00:08:16.142 "base_bdevs_list": [ 00:08:16.142 { 00:08:16.142 "name": "BaseBdev1", 00:08:16.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.142 "is_configured": false, 00:08:16.142 "data_offset": 0, 00:08:16.142 "data_size": 0 00:08:16.142 }, 00:08:16.142 { 00:08:16.142 "name": "BaseBdev2", 00:08:16.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.142 "is_configured": false, 00:08:16.142 "data_offset": 0, 00:08:16.142 "data_size": 0 00:08:16.142 } 00:08:16.142 ] 00:08:16.142 }' 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.142 11:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.739 [2024-11-04 11:40:42.044279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.739 [2024-11-04 11:40:42.044322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:16.739 [2024-11-04 11:40:42.052266] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.739 [2024-11-04 11:40:42.052319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.739 [2024-11-04 11:40:42.052330] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.739 [2024-11-04 11:40:42.052344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.739 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.739 [2024-11-04 11:40:42.098792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.739 BaseBdev1 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.740 [ 00:08:16.740 { 00:08:16.740 "name": "BaseBdev1", 00:08:16.740 "aliases": [ 00:08:16.740 "c7f738db-402d-46e8-9b94-c165b576097a" 00:08:16.740 ], 00:08:16.740 "product_name": "Malloc disk", 00:08:16.740 "block_size": 512, 00:08:16.740 "num_blocks": 65536, 00:08:16.740 "uuid": "c7f738db-402d-46e8-9b94-c165b576097a", 00:08:16.740 "assigned_rate_limits": { 00:08:16.740 "rw_ios_per_sec": 0, 00:08:16.740 "rw_mbytes_per_sec": 0, 00:08:16.740 "r_mbytes_per_sec": 0, 00:08:16.740 "w_mbytes_per_sec": 0 00:08:16.740 }, 00:08:16.740 "claimed": true, 00:08:16.740 "claim_type": "exclusive_write", 00:08:16.740 "zoned": false, 00:08:16.740 "supported_io_types": { 00:08:16.740 "read": true, 00:08:16.740 "write": true, 00:08:16.740 "unmap": true, 00:08:16.740 "flush": true, 00:08:16.740 "reset": true, 00:08:16.740 "nvme_admin": false, 00:08:16.740 "nvme_io": false, 00:08:16.740 "nvme_io_md": false, 00:08:16.740 "write_zeroes": true, 00:08:16.740 "zcopy": true, 00:08:16.740 "get_zone_info": false, 00:08:16.740 "zone_management": false, 00:08:16.740 "zone_append": false, 00:08:16.740 "compare": false, 00:08:16.740 "compare_and_write": false, 00:08:16.740 "abort": true, 00:08:16.740 "seek_hole": false, 00:08:16.740 "seek_data": false, 00:08:16.740 "copy": true, 00:08:16.740 "nvme_iov_md": 
false 00:08:16.740 }, 00:08:16.740 "memory_domains": [ 00:08:16.740 { 00:08:16.740 "dma_device_id": "system", 00:08:16.740 "dma_device_type": 1 00:08:16.740 }, 00:08:16.740 { 00:08:16.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.740 "dma_device_type": 2 00:08:16.740 } 00:08:16.740 ], 00:08:16.740 "driver_specific": {} 00:08:16.740 } 00:08:16.740 ] 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.740 11:40:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.740 "name": "Existed_Raid", 00:08:16.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.740 "strip_size_kb": 64, 00:08:16.740 "state": "configuring", 00:08:16.740 "raid_level": "concat", 00:08:16.740 "superblock": false, 00:08:16.740 "num_base_bdevs": 2, 00:08:16.740 "num_base_bdevs_discovered": 1, 00:08:16.740 "num_base_bdevs_operational": 2, 00:08:16.740 "base_bdevs_list": [ 00:08:16.740 { 00:08:16.740 "name": "BaseBdev1", 00:08:16.740 "uuid": "c7f738db-402d-46e8-9b94-c165b576097a", 00:08:16.740 "is_configured": true, 00:08:16.740 "data_offset": 0, 00:08:16.740 "data_size": 65536 00:08:16.740 }, 00:08:16.740 { 00:08:16.740 "name": "BaseBdev2", 00:08:16.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.740 "is_configured": false, 00:08:16.740 "data_offset": 0, 00:08:16.740 "data_size": 0 00:08:16.740 } 00:08:16.740 ] 00:08:16.740 }' 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.740 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.308 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.308 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.308 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.308 [2024-11-04 11:40:42.558070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.308 [2024-11-04 11:40:42.558129] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:17.308 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.309 [2024-11-04 11:40:42.566122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.309 [2024-11-04 11:40:42.567999] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.309 [2024-11-04 11:40:42.568046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.309 "name": "Existed_Raid", 00:08:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.309 "strip_size_kb": 64, 00:08:17.309 "state": "configuring", 00:08:17.309 "raid_level": "concat", 00:08:17.309 "superblock": false, 00:08:17.309 "num_base_bdevs": 2, 00:08:17.309 "num_base_bdevs_discovered": 1, 00:08:17.309 "num_base_bdevs_operational": 2, 00:08:17.309 "base_bdevs_list": [ 00:08:17.309 { 00:08:17.309 "name": "BaseBdev1", 00:08:17.309 "uuid": "c7f738db-402d-46e8-9b94-c165b576097a", 00:08:17.309 "is_configured": true, 00:08:17.309 "data_offset": 0, 00:08:17.309 "data_size": 65536 00:08:17.309 }, 00:08:17.309 { 00:08:17.309 "name": "BaseBdev2", 00:08:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.309 "is_configured": false, 00:08:17.309 "data_offset": 0, 00:08:17.309 "data_size": 0 
00:08:17.309 } 00:08:17.309 ] 00:08:17.309 }' 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.309 11:40:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.568 [2024-11-04 11:40:43.072136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.568 [2024-11-04 11:40:43.072196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.568 [2024-11-04 11:40:43.072204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:17.568 [2024-11-04 11:40:43.072505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.568 [2024-11-04 11:40:43.072733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.568 [2024-11-04 11:40:43.072757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.568 [2024-11-04 11:40:43.073046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.568 BaseBdev2 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:17.568 11:40:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:17.568 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:17.569 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:17.569 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.569 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.569 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.569 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.569 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.569 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.829 [ 00:08:17.829 { 00:08:17.829 "name": "BaseBdev2", 00:08:17.829 "aliases": [ 00:08:17.829 "846a84bb-4491-45c9-b9b3-ddc88404ead8" 00:08:17.829 ], 00:08:17.829 "product_name": "Malloc disk", 00:08:17.829 "block_size": 512, 00:08:17.829 "num_blocks": 65536, 00:08:17.829 "uuid": "846a84bb-4491-45c9-b9b3-ddc88404ead8", 00:08:17.829 "assigned_rate_limits": { 00:08:17.829 "rw_ios_per_sec": 0, 00:08:17.829 "rw_mbytes_per_sec": 0, 00:08:17.829 "r_mbytes_per_sec": 0, 00:08:17.829 "w_mbytes_per_sec": 0 00:08:17.829 }, 00:08:17.829 "claimed": true, 00:08:17.829 "claim_type": "exclusive_write", 00:08:17.829 "zoned": false, 00:08:17.829 "supported_io_types": { 00:08:17.829 "read": true, 00:08:17.829 "write": true, 00:08:17.829 "unmap": true, 00:08:17.829 "flush": true, 00:08:17.829 "reset": true, 00:08:17.829 "nvme_admin": false, 00:08:17.829 "nvme_io": false, 00:08:17.829 "nvme_io_md": 
false, 00:08:17.829 "write_zeroes": true, 00:08:17.829 "zcopy": true, 00:08:17.829 "get_zone_info": false, 00:08:17.829 "zone_management": false, 00:08:17.829 "zone_append": false, 00:08:17.829 "compare": false, 00:08:17.829 "compare_and_write": false, 00:08:17.829 "abort": true, 00:08:17.829 "seek_hole": false, 00:08:17.829 "seek_data": false, 00:08:17.829 "copy": true, 00:08:17.829 "nvme_iov_md": false 00:08:17.829 }, 00:08:17.829 "memory_domains": [ 00:08:17.829 { 00:08:17.829 "dma_device_id": "system", 00:08:17.829 "dma_device_type": 1 00:08:17.829 }, 00:08:17.829 { 00:08:17.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.829 "dma_device_type": 2 00:08:17.829 } 00:08:17.829 ], 00:08:17.829 "driver_specific": {} 00:08:17.829 } 00:08:17.829 ] 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.829 "name": "Existed_Raid", 00:08:17.829 "uuid": "ba2246d6-9c07-4d2d-abfc-52f2a0d16c00", 00:08:17.829 "strip_size_kb": 64, 00:08:17.829 "state": "online", 00:08:17.829 "raid_level": "concat", 00:08:17.829 "superblock": false, 00:08:17.829 "num_base_bdevs": 2, 00:08:17.829 "num_base_bdevs_discovered": 2, 00:08:17.829 "num_base_bdevs_operational": 2, 00:08:17.829 "base_bdevs_list": [ 00:08:17.829 { 00:08:17.829 "name": "BaseBdev1", 00:08:17.829 "uuid": "c7f738db-402d-46e8-9b94-c165b576097a", 00:08:17.829 "is_configured": true, 00:08:17.829 "data_offset": 0, 00:08:17.829 "data_size": 65536 00:08:17.829 }, 00:08:17.829 { 00:08:17.829 "name": "BaseBdev2", 00:08:17.829 "uuid": "846a84bb-4491-45c9-b9b3-ddc88404ead8", 00:08:17.829 "is_configured": true, 00:08:17.829 "data_offset": 0, 00:08:17.829 "data_size": 65536 00:08:17.829 } 00:08:17.829 ] 00:08:17.829 }' 00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:17.829 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.089 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.089 [2024-11-04 11:40:43.499892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.090 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.090 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.090 "name": "Existed_Raid", 00:08:18.090 "aliases": [ 00:08:18.090 "ba2246d6-9c07-4d2d-abfc-52f2a0d16c00" 00:08:18.090 ], 00:08:18.090 "product_name": "Raid Volume", 00:08:18.090 "block_size": 512, 00:08:18.090 "num_blocks": 131072, 00:08:18.090 "uuid": "ba2246d6-9c07-4d2d-abfc-52f2a0d16c00", 00:08:18.090 "assigned_rate_limits": { 00:08:18.090 "rw_ios_per_sec": 0, 00:08:18.090 "rw_mbytes_per_sec": 0, 00:08:18.090 "r_mbytes_per_sec": 
0, 00:08:18.090 "w_mbytes_per_sec": 0 00:08:18.090 }, 00:08:18.090 "claimed": false, 00:08:18.090 "zoned": false, 00:08:18.090 "supported_io_types": { 00:08:18.090 "read": true, 00:08:18.090 "write": true, 00:08:18.090 "unmap": true, 00:08:18.090 "flush": true, 00:08:18.090 "reset": true, 00:08:18.090 "nvme_admin": false, 00:08:18.090 "nvme_io": false, 00:08:18.090 "nvme_io_md": false, 00:08:18.090 "write_zeroes": true, 00:08:18.090 "zcopy": false, 00:08:18.090 "get_zone_info": false, 00:08:18.090 "zone_management": false, 00:08:18.090 "zone_append": false, 00:08:18.090 "compare": false, 00:08:18.090 "compare_and_write": false, 00:08:18.090 "abort": false, 00:08:18.090 "seek_hole": false, 00:08:18.090 "seek_data": false, 00:08:18.090 "copy": false, 00:08:18.090 "nvme_iov_md": false 00:08:18.090 }, 00:08:18.090 "memory_domains": [ 00:08:18.090 { 00:08:18.090 "dma_device_id": "system", 00:08:18.090 "dma_device_type": 1 00:08:18.090 }, 00:08:18.090 { 00:08:18.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.090 "dma_device_type": 2 00:08:18.090 }, 00:08:18.090 { 00:08:18.090 "dma_device_id": "system", 00:08:18.090 "dma_device_type": 1 00:08:18.090 }, 00:08:18.090 { 00:08:18.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.090 "dma_device_type": 2 00:08:18.090 } 00:08:18.090 ], 00:08:18.090 "driver_specific": { 00:08:18.090 "raid": { 00:08:18.090 "uuid": "ba2246d6-9c07-4d2d-abfc-52f2a0d16c00", 00:08:18.090 "strip_size_kb": 64, 00:08:18.090 "state": "online", 00:08:18.090 "raid_level": "concat", 00:08:18.090 "superblock": false, 00:08:18.090 "num_base_bdevs": 2, 00:08:18.090 "num_base_bdevs_discovered": 2, 00:08:18.090 "num_base_bdevs_operational": 2, 00:08:18.090 "base_bdevs_list": [ 00:08:18.090 { 00:08:18.090 "name": "BaseBdev1", 00:08:18.090 "uuid": "c7f738db-402d-46e8-9b94-c165b576097a", 00:08:18.090 "is_configured": true, 00:08:18.090 "data_offset": 0, 00:08:18.090 "data_size": 65536 00:08:18.090 }, 00:08:18.090 { 00:08:18.090 "name": "BaseBdev2", 
00:08:18.090 "uuid": "846a84bb-4491-45c9-b9b3-ddc88404ead8", 00:08:18.090 "is_configured": true, 00:08:18.090 "data_offset": 0, 00:08:18.090 "data_size": 65536 00:08:18.090 } 00:08:18.090 ] 00:08:18.090 } 00:08:18.090 } 00:08:18.090 }' 00:08:18.090 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.090 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:18.090 BaseBdev2' 00:08:18.090 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 [2024-11-04 11:40:43.683224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.349 [2024-11-04 11:40:43.683264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.349 [2024-11-04 11:40:43.683319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.349 "name": "Existed_Raid", 00:08:18.349 "uuid": "ba2246d6-9c07-4d2d-abfc-52f2a0d16c00", 00:08:18.349 "strip_size_kb": 64, 00:08:18.349 
"state": "offline", 00:08:18.349 "raid_level": "concat", 00:08:18.349 "superblock": false, 00:08:18.349 "num_base_bdevs": 2, 00:08:18.349 "num_base_bdevs_discovered": 1, 00:08:18.349 "num_base_bdevs_operational": 1, 00:08:18.349 "base_bdevs_list": [ 00:08:18.349 { 00:08:18.349 "name": null, 00:08:18.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.349 "is_configured": false, 00:08:18.349 "data_offset": 0, 00:08:18.349 "data_size": 65536 00:08:18.349 }, 00:08:18.349 { 00:08:18.349 "name": "BaseBdev2", 00:08:18.349 "uuid": "846a84bb-4491-45c9-b9b3-ddc88404ead8", 00:08:18.349 "is_configured": true, 00:08:18.349 "data_offset": 0, 00:08:18.349 "data_size": 65536 00:08:18.349 } 00:08:18.349 ] 00:08:18.349 }' 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.349 11:40:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.916 [2024-11-04 11:40:44.284896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.916 [2024-11-04 11:40:44.284962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61905 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61905 ']' 00:08:18.916 11:40:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61905 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61905 00:08:19.177 killing process with pid 61905 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61905' 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61905 00:08:19.177 11:40:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61905 00:08:19.177 [2024-11-04 11:40:44.474183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.177 [2024-11-04 11:40:44.492875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.113 11:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.113 00:08:20.113 real 0m4.989s 00:08:20.113 user 0m7.181s 00:08:20.113 sys 0m0.757s 00:08:20.113 11:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.113 11:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.113 ************************************ 00:08:20.113 END TEST raid_state_function_test 00:08:20.113 ************************************ 00:08:20.373 11:40:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:20.373 11:40:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:08:20.373 11:40:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.373 11:40:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.373 ************************************ 00:08:20.373 START TEST raid_state_function_test_sb 00:08:20.373 ************************************ 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62153 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62153' 00:08:20.373 Process raid pid: 62153 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62153 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62153 ']' 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.373 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.373 11:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.373 [2024-11-04 11:40:45.777876] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:20.373 [2024-11-04 11:40:45.778011] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.633 [2024-11-04 11:40:45.954289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.633 [2024-11-04 11:40:46.075935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.892 [2024-11-04 11:40:46.303101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.892 [2024-11-04 11:40:46.303152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.151 [2024-11-04 11:40:46.650707] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:21.151 [2024-11-04 11:40:46.650769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.151 [2024-11-04 11:40:46.650781] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.151 [2024-11-04 11:40:46.650792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.151 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.410 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.410 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.410 "name": "Existed_Raid", 00:08:21.410 "uuid": "278b3e49-88e3-47dd-801a-4f812a311612", 00:08:21.410 "strip_size_kb": 64, 00:08:21.410 "state": "configuring", 00:08:21.410 "raid_level": "concat", 00:08:21.410 "superblock": true, 00:08:21.410 "num_base_bdevs": 2, 00:08:21.410 "num_base_bdevs_discovered": 0, 00:08:21.410 "num_base_bdevs_operational": 2, 00:08:21.410 "base_bdevs_list": [ 00:08:21.410 { 00:08:21.410 "name": "BaseBdev1", 00:08:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.410 "is_configured": false, 00:08:21.410 "data_offset": 0, 00:08:21.410 "data_size": 0 00:08:21.410 }, 00:08:21.410 { 00:08:21.410 "name": "BaseBdev2", 00:08:21.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.410 "is_configured": false, 00:08:21.410 "data_offset": 0, 00:08:21.410 "data_size": 0 00:08:21.410 } 00:08:21.410 ] 00:08:21.410 }' 00:08:21.410 11:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.410 11:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.670 [2024-11-04 11:40:47.097915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:21.670 [2024-11-04 11:40:47.097966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.670 [2024-11-04 11:40:47.105903] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.670 [2024-11-04 11:40:47.105976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.670 [2024-11-04 11:40:47.105987] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.670 [2024-11-04 11:40:47.106001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.670 [2024-11-04 11:40:47.149685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.670 BaseBdev1 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.670 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.670 [ 00:08:21.670 { 00:08:21.670 "name": "BaseBdev1", 00:08:21.670 "aliases": [ 00:08:21.670 "d3d6c5a4-ed79-4519-b1a9-84faa3c0796c" 00:08:21.670 ], 00:08:21.670 "product_name": "Malloc disk", 00:08:21.670 "block_size": 512, 00:08:21.670 "num_blocks": 65536, 00:08:21.670 "uuid": "d3d6c5a4-ed79-4519-b1a9-84faa3c0796c", 00:08:21.670 "assigned_rate_limits": { 00:08:21.670 "rw_ios_per_sec": 0, 00:08:21.670 "rw_mbytes_per_sec": 0, 00:08:21.670 "r_mbytes_per_sec": 0, 00:08:21.670 "w_mbytes_per_sec": 0 00:08:21.670 }, 00:08:21.670 "claimed": true, 
00:08:21.670 "claim_type": "exclusive_write", 00:08:21.670 "zoned": false, 00:08:21.670 "supported_io_types": { 00:08:21.670 "read": true, 00:08:21.670 "write": true, 00:08:21.670 "unmap": true, 00:08:21.670 "flush": true, 00:08:21.670 "reset": true, 00:08:21.670 "nvme_admin": false, 00:08:21.670 "nvme_io": false, 00:08:21.670 "nvme_io_md": false, 00:08:21.670 "write_zeroes": true, 00:08:21.670 "zcopy": true, 00:08:21.671 "get_zone_info": false, 00:08:21.671 "zone_management": false, 00:08:21.671 "zone_append": false, 00:08:21.671 "compare": false, 00:08:21.671 "compare_and_write": false, 00:08:21.671 "abort": true, 00:08:21.671 "seek_hole": false, 00:08:21.671 "seek_data": false, 00:08:21.671 "copy": true, 00:08:21.671 "nvme_iov_md": false 00:08:21.671 }, 00:08:21.671 "memory_domains": [ 00:08:21.671 { 00:08:21.671 "dma_device_id": "system", 00:08:21.671 "dma_device_type": 1 00:08:21.671 }, 00:08:21.671 { 00:08:21.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.671 "dma_device_type": 2 00:08:21.671 } 00:08:21.671 ], 00:08:21.671 "driver_specific": {} 00:08:21.671 } 00:08:21.671 ] 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.671 11:40:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.671 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.930 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.930 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.930 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.930 "name": "Existed_Raid", 00:08:21.930 "uuid": "691585ba-5494-4272-99e0-b8d322b59f52", 00:08:21.930 "strip_size_kb": 64, 00:08:21.930 "state": "configuring", 00:08:21.930 "raid_level": "concat", 00:08:21.930 "superblock": true, 00:08:21.930 "num_base_bdevs": 2, 00:08:21.930 "num_base_bdevs_discovered": 1, 00:08:21.930 "num_base_bdevs_operational": 2, 00:08:21.930 "base_bdevs_list": [ 00:08:21.930 { 00:08:21.930 "name": "BaseBdev1", 00:08:21.930 "uuid": "d3d6c5a4-ed79-4519-b1a9-84faa3c0796c", 00:08:21.930 "is_configured": true, 00:08:21.930 "data_offset": 2048, 00:08:21.930 "data_size": 63488 00:08:21.930 }, 00:08:21.930 { 00:08:21.930 "name": "BaseBdev2", 00:08:21.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.930 
"is_configured": false, 00:08:21.930 "data_offset": 0, 00:08:21.930 "data_size": 0 00:08:21.930 } 00:08:21.930 ] 00:08:21.930 }' 00:08:21.930 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.930 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.190 [2024-11-04 11:40:47.628978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.190 [2024-11-04 11:40:47.629056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.190 [2024-11-04 11:40:47.641036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.190 [2024-11-04 11:40:47.643072] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.190 [2024-11-04 11:40:47.643126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.190 11:40:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.190 11:40:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.190 "name": "Existed_Raid", 00:08:22.190 "uuid": "42641b76-9b68-43a3-89d9-c5d63ed925e3", 00:08:22.190 "strip_size_kb": 64, 00:08:22.190 "state": "configuring", 00:08:22.190 "raid_level": "concat", 00:08:22.190 "superblock": true, 00:08:22.190 "num_base_bdevs": 2, 00:08:22.190 "num_base_bdevs_discovered": 1, 00:08:22.190 "num_base_bdevs_operational": 2, 00:08:22.190 "base_bdevs_list": [ 00:08:22.190 { 00:08:22.190 "name": "BaseBdev1", 00:08:22.190 "uuid": "d3d6c5a4-ed79-4519-b1a9-84faa3c0796c", 00:08:22.190 "is_configured": true, 00:08:22.190 "data_offset": 2048, 00:08:22.190 "data_size": 63488 00:08:22.190 }, 00:08:22.190 { 00:08:22.190 "name": "BaseBdev2", 00:08:22.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.190 "is_configured": false, 00:08:22.190 "data_offset": 0, 00:08:22.190 "data_size": 0 00:08:22.190 } 00:08:22.190 ] 00:08:22.190 }' 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.190 11:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.757 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.758 [2024-11-04 11:40:48.120142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.758 [2024-11-04 11:40:48.120431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.758 [2024-11-04 11:40:48.120472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.758 [2024-11-04 11:40:48.120773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:22.758 [2024-11-04 11:40:48.120970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.758 [2024-11-04 11:40:48.120993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:22.758 BaseBdev2 00:08:22.758 [2024-11-04 11:40:48.121151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.758 11:40:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.758 [ 00:08:22.758 { 00:08:22.758 "name": "BaseBdev2", 00:08:22.758 "aliases": [ 00:08:22.758 "5fd0a1b9-3423-4f85-ae42-fc1d9a40d738" 00:08:22.758 ], 00:08:22.758 "product_name": "Malloc disk", 00:08:22.758 "block_size": 512, 00:08:22.758 "num_blocks": 65536, 00:08:22.758 "uuid": "5fd0a1b9-3423-4f85-ae42-fc1d9a40d738", 00:08:22.758 "assigned_rate_limits": { 00:08:22.758 "rw_ios_per_sec": 0, 00:08:22.758 "rw_mbytes_per_sec": 0, 00:08:22.758 "r_mbytes_per_sec": 0, 00:08:22.758 "w_mbytes_per_sec": 0 00:08:22.758 }, 00:08:22.758 "claimed": true, 00:08:22.758 "claim_type": "exclusive_write", 00:08:22.758 "zoned": false, 00:08:22.758 "supported_io_types": { 00:08:22.758 "read": true, 00:08:22.758 "write": true, 00:08:22.758 "unmap": true, 00:08:22.758 "flush": true, 00:08:22.758 "reset": true, 00:08:22.758 "nvme_admin": false, 00:08:22.758 "nvme_io": false, 00:08:22.758 "nvme_io_md": false, 00:08:22.758 "write_zeroes": true, 00:08:22.758 "zcopy": true, 00:08:22.758 "get_zone_info": false, 00:08:22.758 "zone_management": false, 00:08:22.758 "zone_append": false, 00:08:22.758 "compare": false, 00:08:22.758 "compare_and_write": false, 00:08:22.758 "abort": true, 00:08:22.758 "seek_hole": false, 00:08:22.758 "seek_data": false, 00:08:22.758 "copy": true, 00:08:22.758 "nvme_iov_md": false 00:08:22.758 }, 00:08:22.758 "memory_domains": [ 00:08:22.758 { 00:08:22.758 "dma_device_id": "system", 00:08:22.758 "dma_device_type": 1 00:08:22.758 }, 00:08:22.758 { 00:08:22.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.758 "dma_device_type": 2 00:08:22.758 } 00:08:22.758 ], 00:08:22.758 "driver_specific": {} 00:08:22.758 } 00:08:22.758 ] 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:22.758 11:40:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.758 11:40:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.758 "name": "Existed_Raid", 00:08:22.758 "uuid": "42641b76-9b68-43a3-89d9-c5d63ed925e3", 00:08:22.758 "strip_size_kb": 64, 00:08:22.758 "state": "online", 00:08:22.758 "raid_level": "concat", 00:08:22.758 "superblock": true, 00:08:22.758 "num_base_bdevs": 2, 00:08:22.758 "num_base_bdevs_discovered": 2, 00:08:22.758 "num_base_bdevs_operational": 2, 00:08:22.758 "base_bdevs_list": [ 00:08:22.758 { 00:08:22.758 "name": "BaseBdev1", 00:08:22.758 "uuid": "d3d6c5a4-ed79-4519-b1a9-84faa3c0796c", 00:08:22.758 "is_configured": true, 00:08:22.758 "data_offset": 2048, 00:08:22.758 "data_size": 63488 00:08:22.758 }, 00:08:22.758 { 00:08:22.758 "name": "BaseBdev2", 00:08:22.758 "uuid": "5fd0a1b9-3423-4f85-ae42-fc1d9a40d738", 00:08:22.758 "is_configured": true, 00:08:22.758 "data_offset": 2048, 00:08:22.758 "data_size": 63488 00:08:22.758 } 00:08:22.758 ] 00:08:22.758 }' 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.758 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.325 11:40:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 [2024-11-04 11:40:48.631738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.325 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.325 "name": "Existed_Raid", 00:08:23.325 "aliases": [ 00:08:23.325 "42641b76-9b68-43a3-89d9-c5d63ed925e3" 00:08:23.325 ], 00:08:23.325 "product_name": "Raid Volume", 00:08:23.325 "block_size": 512, 00:08:23.325 "num_blocks": 126976, 00:08:23.325 "uuid": "42641b76-9b68-43a3-89d9-c5d63ed925e3", 00:08:23.325 "assigned_rate_limits": { 00:08:23.325 "rw_ios_per_sec": 0, 00:08:23.325 "rw_mbytes_per_sec": 0, 00:08:23.325 "r_mbytes_per_sec": 0, 00:08:23.325 "w_mbytes_per_sec": 0 00:08:23.326 }, 00:08:23.326 "claimed": false, 00:08:23.326 "zoned": false, 00:08:23.326 "supported_io_types": { 00:08:23.326 "read": true, 00:08:23.326 "write": true, 00:08:23.326 "unmap": true, 00:08:23.326 "flush": true, 00:08:23.326 "reset": true, 00:08:23.326 "nvme_admin": false, 00:08:23.326 "nvme_io": false, 00:08:23.326 "nvme_io_md": false, 00:08:23.326 "write_zeroes": true, 00:08:23.326 "zcopy": false, 00:08:23.326 "get_zone_info": false, 00:08:23.326 "zone_management": false, 00:08:23.326 "zone_append": false, 00:08:23.326 "compare": false, 00:08:23.326 "compare_and_write": false, 00:08:23.326 "abort": false, 00:08:23.326 "seek_hole": false, 00:08:23.326 "seek_data": false, 00:08:23.326 "copy": false, 00:08:23.326 "nvme_iov_md": false 00:08:23.326 }, 00:08:23.326 "memory_domains": [ 00:08:23.326 { 00:08:23.326 "dma_device_id": 
"system", 00:08:23.326 "dma_device_type": 1 00:08:23.326 }, 00:08:23.326 { 00:08:23.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.326 "dma_device_type": 2 00:08:23.326 }, 00:08:23.326 { 00:08:23.326 "dma_device_id": "system", 00:08:23.326 "dma_device_type": 1 00:08:23.326 }, 00:08:23.326 { 00:08:23.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.326 "dma_device_type": 2 00:08:23.326 } 00:08:23.326 ], 00:08:23.326 "driver_specific": { 00:08:23.326 "raid": { 00:08:23.326 "uuid": "42641b76-9b68-43a3-89d9-c5d63ed925e3", 00:08:23.326 "strip_size_kb": 64, 00:08:23.326 "state": "online", 00:08:23.326 "raid_level": "concat", 00:08:23.326 "superblock": true, 00:08:23.326 "num_base_bdevs": 2, 00:08:23.326 "num_base_bdevs_discovered": 2, 00:08:23.326 "num_base_bdevs_operational": 2, 00:08:23.326 "base_bdevs_list": [ 00:08:23.326 { 00:08:23.326 "name": "BaseBdev1", 00:08:23.326 "uuid": "d3d6c5a4-ed79-4519-b1a9-84faa3c0796c", 00:08:23.326 "is_configured": true, 00:08:23.326 "data_offset": 2048, 00:08:23.326 "data_size": 63488 00:08:23.326 }, 00:08:23.326 { 00:08:23.326 "name": "BaseBdev2", 00:08:23.326 "uuid": "5fd0a1b9-3423-4f85-ae42-fc1d9a40d738", 00:08:23.326 "is_configured": true, 00:08:23.326 "data_offset": 2048, 00:08:23.326 "data_size": 63488 00:08:23.326 } 00:08:23.326 ] 00:08:23.326 } 00:08:23.326 } 00:08:23.326 }' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:23.326 BaseBdev2' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.326 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.584 [2024-11-04 11:40:48.847093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.584 [2024-11-04 11:40:48.847129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.584 [2024-11-04 11:40:48.847185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.584 11:40:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.584 11:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.584 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.584 "name": "Existed_Raid", 00:08:23.584 "uuid": "42641b76-9b68-43a3-89d9-c5d63ed925e3", 00:08:23.584 "strip_size_kb": 64, 00:08:23.584 "state": "offline", 00:08:23.584 "raid_level": "concat", 00:08:23.584 "superblock": true, 00:08:23.584 "num_base_bdevs": 2, 00:08:23.584 "num_base_bdevs_discovered": 1, 00:08:23.584 "num_base_bdevs_operational": 1, 00:08:23.584 "base_bdevs_list": [ 00:08:23.584 { 00:08:23.584 "name": null, 00:08:23.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.584 "is_configured": false, 00:08:23.584 "data_offset": 0, 00:08:23.584 "data_size": 63488 00:08:23.584 }, 00:08:23.584 { 00:08:23.584 "name": "BaseBdev2", 00:08:23.584 "uuid": "5fd0a1b9-3423-4f85-ae42-fc1d9a40d738", 00:08:23.584 "is_configured": true, 00:08:23.584 "data_offset": 2048, 00:08:23.584 "data_size": 63488 00:08:23.584 } 00:08:23.584 ] 00:08:23.584 }' 00:08:23.585 
11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.585 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.849 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:23.849 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.849 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.849 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.849 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.849 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.849 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.107 [2024-11-04 11:40:49.386970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.107 [2024-11-04 11:40:49.387033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62153 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62153 ']' 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62153 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62153 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.107 11:40:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62153' 00:08:24.107 killing process with pid 62153 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62153 00:08:24.107 [2024-11-04 11:40:49.584461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.107 11:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62153 00:08:24.107 [2024-11-04 11:40:49.603420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.502 ************************************ 00:08:25.502 END TEST raid_state_function_test_sb 00:08:25.502 ************************************ 00:08:25.502 11:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:25.502 00:08:25.502 real 0m5.148s 00:08:25.502 user 0m7.372s 00:08:25.502 sys 0m0.790s 00:08:25.502 11:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:25.502 11:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.502 11:40:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:25.502 11:40:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:25.502 11:40:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.502 11:40:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.502 ************************************ 00:08:25.502 START TEST raid_superblock_test 00:08:25.502 ************************************ 00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:25.502 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:25.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62405 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62405 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@833 -- # '[' -z 62405 ']' 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.760 11:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.760 [2024-11-04 11:40:50.986782] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:25.760 [2024-11-04 11:40:50.987004] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62405 ] 00:08:25.760 [2024-11-04 11:40:51.164108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.017 [2024-11-04 11:40:51.299541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.017 [2024-11-04 11:40:51.525865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.017 [2024-11-04 11:40:51.525907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.591 11:40:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.591 malloc1 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.591 [2024-11-04 11:40:51.936530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.591 [2024-11-04 11:40:51.936653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.591 [2024-11-04 11:40:51.936724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:26.591 [2024-11-04 11:40:51.936768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.591 
[2024-11-04 11:40:51.939006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.591 [2024-11-04 11:40:51.939076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.591 pt1 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.591 malloc2 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.591 11:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.592 11:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.592 11:40:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.592 [2024-11-04 11:40:52.001154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.592 [2024-11-04 11:40:52.001239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.592 [2024-11-04 11:40:52.001271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:26.592 [2024-11-04 11:40:52.001281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.592 [2024-11-04 11:40:52.003743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.592 [2024-11-04 11:40:52.003852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.592 pt2 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.592 [2024-11-04 11:40:52.013185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:26.592 [2024-11-04 11:40:52.015229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.592 [2024-11-04 11:40:52.015502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:26.592 [2024-11-04 11:40:52.015522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:26.592 
[2024-11-04 11:40:52.015819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:26.592 [2024-11-04 11:40:52.015989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:26.592 [2024-11-04 11:40:52.016002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:26.592 [2024-11-04 11:40:52.016212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.592 11:40:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.592 "name": "raid_bdev1", 00:08:26.592 "uuid": "e4d6b7d0-b5a1-4233-9a41-1546264937da", 00:08:26.592 "strip_size_kb": 64, 00:08:26.592 "state": "online", 00:08:26.592 "raid_level": "concat", 00:08:26.592 "superblock": true, 00:08:26.592 "num_base_bdevs": 2, 00:08:26.592 "num_base_bdevs_discovered": 2, 00:08:26.592 "num_base_bdevs_operational": 2, 00:08:26.592 "base_bdevs_list": [ 00:08:26.592 { 00:08:26.592 "name": "pt1", 00:08:26.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.592 "is_configured": true, 00:08:26.592 "data_offset": 2048, 00:08:26.592 "data_size": 63488 00:08:26.592 }, 00:08:26.592 { 00:08:26.592 "name": "pt2", 00:08:26.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.592 "is_configured": true, 00:08:26.592 "data_offset": 2048, 00:08:26.592 "data_size": 63488 00:08:26.592 } 00:08:26.592 ] 00:08:26.592 }' 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.592 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.160 
11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.160 [2024-11-04 11:40:52.432798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.160 "name": "raid_bdev1", 00:08:27.160 "aliases": [ 00:08:27.160 "e4d6b7d0-b5a1-4233-9a41-1546264937da" 00:08:27.160 ], 00:08:27.160 "product_name": "Raid Volume", 00:08:27.160 "block_size": 512, 00:08:27.160 "num_blocks": 126976, 00:08:27.160 "uuid": "e4d6b7d0-b5a1-4233-9a41-1546264937da", 00:08:27.160 "assigned_rate_limits": { 00:08:27.160 "rw_ios_per_sec": 0, 00:08:27.160 "rw_mbytes_per_sec": 0, 00:08:27.160 "r_mbytes_per_sec": 0, 00:08:27.160 "w_mbytes_per_sec": 0 00:08:27.160 }, 00:08:27.160 "claimed": false, 00:08:27.160 "zoned": false, 00:08:27.160 "supported_io_types": { 00:08:27.160 "read": true, 00:08:27.160 "write": true, 00:08:27.160 "unmap": true, 00:08:27.160 "flush": true, 00:08:27.160 "reset": true, 00:08:27.160 "nvme_admin": false, 00:08:27.160 "nvme_io": false, 00:08:27.160 "nvme_io_md": false, 00:08:27.160 "write_zeroes": true, 00:08:27.160 "zcopy": false, 00:08:27.160 "get_zone_info": false, 00:08:27.160 "zone_management": false, 00:08:27.160 "zone_append": false, 00:08:27.160 "compare": false, 00:08:27.160 "compare_and_write": false, 00:08:27.160 "abort": false, 00:08:27.160 "seek_hole": false, 00:08:27.160 
"seek_data": false, 00:08:27.160 "copy": false, 00:08:27.160 "nvme_iov_md": false 00:08:27.160 }, 00:08:27.160 "memory_domains": [ 00:08:27.160 { 00:08:27.160 "dma_device_id": "system", 00:08:27.160 "dma_device_type": 1 00:08:27.160 }, 00:08:27.160 { 00:08:27.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.160 "dma_device_type": 2 00:08:27.160 }, 00:08:27.160 { 00:08:27.160 "dma_device_id": "system", 00:08:27.160 "dma_device_type": 1 00:08:27.160 }, 00:08:27.160 { 00:08:27.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.160 "dma_device_type": 2 00:08:27.160 } 00:08:27.160 ], 00:08:27.160 "driver_specific": { 00:08:27.160 "raid": { 00:08:27.160 "uuid": "e4d6b7d0-b5a1-4233-9a41-1546264937da", 00:08:27.160 "strip_size_kb": 64, 00:08:27.160 "state": "online", 00:08:27.160 "raid_level": "concat", 00:08:27.160 "superblock": true, 00:08:27.160 "num_base_bdevs": 2, 00:08:27.160 "num_base_bdevs_discovered": 2, 00:08:27.160 "num_base_bdevs_operational": 2, 00:08:27.160 "base_bdevs_list": [ 00:08:27.160 { 00:08:27.160 "name": "pt1", 00:08:27.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.160 "is_configured": true, 00:08:27.160 "data_offset": 2048, 00:08:27.160 "data_size": 63488 00:08:27.160 }, 00:08:27.160 { 00:08:27.160 "name": "pt2", 00:08:27.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.160 "is_configured": true, 00:08:27.160 "data_offset": 2048, 00:08:27.160 "data_size": 63488 00:08:27.160 } 00:08:27.160 ] 00:08:27.160 } 00:08:27.160 } 00:08:27.160 }' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:27.160 pt2' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.160 11:40:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.160 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.161 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:27.161 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.161 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.161 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:27.161 [2024-11-04 11:40:52.656528] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.161 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e4d6b7d0-b5a1-4233-9a41-1546264937da 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e4d6b7d0-b5a1-4233-9a41-1546264937da ']' 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 [2024-11-04 11:40:52.704107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.419 [2024-11-04 11:40:52.704138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.419 [2024-11-04 11:40:52.704237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.419 [2024-11-04 11:40:52.704292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.419 [2024-11-04 11:40:52.704305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 [2024-11-04 11:40:52.831939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:27.419 [2024-11-04 11:40:52.834142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:27.419 [2024-11-04 11:40:52.834283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:27.419 [2024-11-04 11:40:52.834366] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:27.419 [2024-11-04 11:40:52.834385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.419 [2024-11-04 11:40:52.834409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:27.419 request: 00:08:27.419 { 00:08:27.419 "name": "raid_bdev1", 00:08:27.419 "raid_level": "concat", 00:08:27.419 "base_bdevs": [ 00:08:27.419 "malloc1", 00:08:27.419 "malloc2" 00:08:27.419 ], 00:08:27.419 "strip_size_kb": 64, 00:08:27.419 "superblock": false, 00:08:27.419 "method": "bdev_raid_create", 00:08:27.419 "req_id": 1 00:08:27.419 } 00:08:27.419 Got JSON-RPC error response 00:08:27.419 response: 00:08:27.419 { 00:08:27.419 "code": -17, 00:08:27.419 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:27.419 } 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 
11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.419 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.419 [2024-11-04 11:40:52.879839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.419 [2024-11-04 11:40:52.879992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.419 [2024-11-04 11:40:52.880022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:27.419 [2024-11-04 11:40:52.880035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.420 [2024-11-04 11:40:52.882655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.420 [2024-11-04 11:40:52.882703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.420 [2024-11-04 11:40:52.882805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:27.420 [2024-11-04 11:40:52.882881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.420 pt1 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.420 "name": "raid_bdev1", 00:08:27.420 "uuid": "e4d6b7d0-b5a1-4233-9a41-1546264937da", 00:08:27.420 "strip_size_kb": 64, 00:08:27.420 "state": "configuring", 00:08:27.420 "raid_level": "concat", 00:08:27.420 "superblock": true, 00:08:27.420 "num_base_bdevs": 2, 00:08:27.420 "num_base_bdevs_discovered": 1, 00:08:27.420 "num_base_bdevs_operational": 2, 00:08:27.420 "base_bdevs_list": [ 00:08:27.420 { 00:08:27.420 "name": "pt1", 00:08:27.420 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:27.420 "is_configured": true, 00:08:27.420 "data_offset": 2048, 00:08:27.420 "data_size": 63488 00:08:27.420 }, 00:08:27.420 { 00:08:27.420 "name": null, 00:08:27.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.420 "is_configured": false, 00:08:27.420 "data_offset": 2048, 00:08:27.420 "data_size": 63488 00:08:27.420 } 00:08:27.420 ] 00:08:27.420 }' 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.420 11:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.986 [2024-11-04 11:40:53.319137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.986 [2024-11-04 11:40:53.319216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.986 [2024-11-04 11:40:53.319241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:27.986 [2024-11-04 11:40:53.319254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.986 [2024-11-04 11:40:53.319836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.986 [2024-11-04 11:40:53.319867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:27.986 [2024-11-04 11:40:53.319960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:27.986 [2024-11-04 11:40:53.319988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.986 [2024-11-04 11:40:53.320122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:27.986 [2024-11-04 11:40:53.320136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:27.986 [2024-11-04 11:40:53.320411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:27.986 [2024-11-04 11:40:53.320592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:27.986 [2024-11-04 11:40:53.320603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:27.986 [2024-11-04 11:40:53.320767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.986 pt2 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.986 "name": "raid_bdev1", 00:08:27.986 "uuid": "e4d6b7d0-b5a1-4233-9a41-1546264937da", 00:08:27.986 "strip_size_kb": 64, 00:08:27.986 "state": "online", 00:08:27.986 "raid_level": "concat", 00:08:27.986 "superblock": true, 00:08:27.986 "num_base_bdevs": 2, 00:08:27.986 "num_base_bdevs_discovered": 2, 00:08:27.986 "num_base_bdevs_operational": 2, 00:08:27.986 "base_bdevs_list": [ 00:08:27.986 { 00:08:27.986 "name": "pt1", 00:08:27.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.986 "is_configured": true, 00:08:27.986 "data_offset": 2048, 00:08:27.986 "data_size": 63488 00:08:27.986 }, 00:08:27.986 { 00:08:27.986 "name": "pt2", 00:08:27.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.986 "is_configured": true, 00:08:27.986 "data_offset": 2048, 00:08:27.986 "data_size": 63488 00:08:27.986 } 00:08:27.986 ] 00:08:27.986 }' 00:08:27.986 11:40:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.986 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.553 [2024-11-04 11:40:53.802654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.553 "name": "raid_bdev1", 00:08:28.553 "aliases": [ 00:08:28.553 "e4d6b7d0-b5a1-4233-9a41-1546264937da" 00:08:28.553 ], 00:08:28.553 "product_name": "Raid Volume", 00:08:28.553 "block_size": 512, 00:08:28.553 "num_blocks": 126976, 00:08:28.553 "uuid": "e4d6b7d0-b5a1-4233-9a41-1546264937da", 00:08:28.553 "assigned_rate_limits": { 00:08:28.553 "rw_ios_per_sec": 0, 00:08:28.553 "rw_mbytes_per_sec": 0, 00:08:28.553 
"r_mbytes_per_sec": 0, 00:08:28.553 "w_mbytes_per_sec": 0 00:08:28.553 }, 00:08:28.553 "claimed": false, 00:08:28.553 "zoned": false, 00:08:28.553 "supported_io_types": { 00:08:28.553 "read": true, 00:08:28.553 "write": true, 00:08:28.553 "unmap": true, 00:08:28.553 "flush": true, 00:08:28.553 "reset": true, 00:08:28.553 "nvme_admin": false, 00:08:28.553 "nvme_io": false, 00:08:28.553 "nvme_io_md": false, 00:08:28.553 "write_zeroes": true, 00:08:28.553 "zcopy": false, 00:08:28.553 "get_zone_info": false, 00:08:28.553 "zone_management": false, 00:08:28.553 "zone_append": false, 00:08:28.553 "compare": false, 00:08:28.553 "compare_and_write": false, 00:08:28.553 "abort": false, 00:08:28.553 "seek_hole": false, 00:08:28.553 "seek_data": false, 00:08:28.553 "copy": false, 00:08:28.553 "nvme_iov_md": false 00:08:28.553 }, 00:08:28.553 "memory_domains": [ 00:08:28.553 { 00:08:28.553 "dma_device_id": "system", 00:08:28.553 "dma_device_type": 1 00:08:28.553 }, 00:08:28.553 { 00:08:28.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.553 "dma_device_type": 2 00:08:28.553 }, 00:08:28.553 { 00:08:28.553 "dma_device_id": "system", 00:08:28.553 "dma_device_type": 1 00:08:28.553 }, 00:08:28.553 { 00:08:28.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.553 "dma_device_type": 2 00:08:28.553 } 00:08:28.553 ], 00:08:28.553 "driver_specific": { 00:08:28.553 "raid": { 00:08:28.553 "uuid": "e4d6b7d0-b5a1-4233-9a41-1546264937da", 00:08:28.553 "strip_size_kb": 64, 00:08:28.553 "state": "online", 00:08:28.553 "raid_level": "concat", 00:08:28.553 "superblock": true, 00:08:28.553 "num_base_bdevs": 2, 00:08:28.553 "num_base_bdevs_discovered": 2, 00:08:28.553 "num_base_bdevs_operational": 2, 00:08:28.553 "base_bdevs_list": [ 00:08:28.553 { 00:08:28.553 "name": "pt1", 00:08:28.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.553 "is_configured": true, 00:08:28.553 "data_offset": 2048, 00:08:28.553 "data_size": 63488 00:08:28.553 }, 00:08:28.553 { 00:08:28.553 "name": 
"pt2", 00:08:28.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.553 "is_configured": true, 00:08:28.553 "data_offset": 2048, 00:08:28.553 "data_size": 63488 00:08:28.553 } 00:08:28.553 ] 00:08:28.553 } 00:08:28.553 } 00:08:28.553 }' 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.553 pt2' 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.553 11:40:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:28.554 [2024-11-04 11:40:54.054152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.554 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e4d6b7d0-b5a1-4233-9a41-1546264937da '!=' e4d6b7d0-b5a1-4233-9a41-1546264937da ']' 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62405 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62405 ']' 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 62405 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62405 00:08:28.812 killing process with pid 62405 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62405' 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62405 00:08:28.812 11:40:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62405 00:08:28.812 [2024-11-04 11:40:54.128097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.813 [2024-11-04 11:40:54.128206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.813 [2024-11-04 11:40:54.128322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.813 [2024-11-04 11:40:54.128340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:29.070 [2024-11-04 11:40:54.362413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.445 11:40:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:30.445 00:08:30.445 real 0m4.695s 00:08:30.445 user 0m6.570s 00:08:30.445 sys 0m0.718s 00:08:30.445 11:40:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.445 11:40:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:30.445 ************************************ 00:08:30.445 END TEST raid_superblock_test 00:08:30.445 ************************************ 00:08:30.445 11:40:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:30.445 11:40:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:30.445 11:40:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.445 11:40:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 ************************************ 00:08:30.445 START TEST raid_read_error_test 00:08:30.445 ************************************ 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uxibmi221B 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62622 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62622 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62622 ']' 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:30.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.445 11:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 [2024-11-04 11:40:55.757040] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:30.445 [2024-11-04 11:40:55.757251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62622 ] 00:08:30.445 [2024-11-04 11:40:55.934992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.703 [2024-11-04 11:40:56.063724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.961 [2024-11-04 11:40:56.288950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.961 [2024-11-04 11:40:56.289094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.219 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:31.219 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:31.219 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.219 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.219 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.219 11:40:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.477 BaseBdev1_malloc 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 true 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 [2024-11-04 11:40:56.759599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.477 [2024-11-04 11:40:56.759760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.477 [2024-11-04 11:40:56.759789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:31.477 [2024-11-04 11:40:56.759826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.477 [2024-11-04 11:40:56.762381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.477 [2024-11-04 11:40:56.762431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.477 BaseBdev1 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.477 11:40:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 BaseBdev2_malloc 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 true 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 [2024-11-04 11:40:56.818097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:31.477 [2024-11-04 11:40:56.818162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.477 [2024-11-04 11:40:56.818182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:31.477 [2024-11-04 11:40:56.818193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.477 [2024-11-04 11:40:56.820571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.477 [2024-11-04 11:40:56.820618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:08:31.477 BaseBdev2 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 [2024-11-04 11:40:56.826180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.477 [2024-11-04 11:40:56.828338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.477 [2024-11-04 11:40:56.828608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.477 [2024-11-04 11:40:56.828633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:31.477 [2024-11-04 11:40:56.828948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:31.477 [2024-11-04 11:40:56.829167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.477 [2024-11-04 11:40:56.829182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:31.477 [2024-11-04 11:40:56.829405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.477 "name": "raid_bdev1", 00:08:31.477 "uuid": "bff3d4b3-fc7b-4bc2-8f9f-c4a0c49a82aa", 00:08:31.477 "strip_size_kb": 64, 00:08:31.477 "state": "online", 00:08:31.477 "raid_level": "concat", 00:08:31.477 "superblock": true, 00:08:31.477 "num_base_bdevs": 2, 00:08:31.477 "num_base_bdevs_discovered": 2, 00:08:31.477 "num_base_bdevs_operational": 2, 00:08:31.477 "base_bdevs_list": [ 00:08:31.477 { 00:08:31.477 "name": "BaseBdev1", 00:08:31.477 "uuid": "9b19f802-d61f-5e40-ab3d-4db7fa3927cb", 00:08:31.477 "is_configured": true, 00:08:31.477 "data_offset": 2048, 00:08:31.477 "data_size": 63488 
00:08:31.477 }, 00:08:31.477 { 00:08:31.477 "name": "BaseBdev2", 00:08:31.477 "uuid": "1a5f441b-9947-57e9-bd35-9619fed5a89a", 00:08:31.477 "is_configured": true, 00:08:31.477 "data_offset": 2048, 00:08:31.477 "data_size": 63488 00:08:31.477 } 00:08:31.477 ] 00:08:31.477 }' 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.477 11:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.044 11:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:32.044 11:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.044 [2024-11-04 11:40:57.374842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.978 "name": "raid_bdev1", 00:08:32.978 "uuid": "bff3d4b3-fc7b-4bc2-8f9f-c4a0c49a82aa", 00:08:32.978 "strip_size_kb": 64, 00:08:32.978 "state": "online", 00:08:32.978 "raid_level": "concat", 00:08:32.978 "superblock": true, 00:08:32.978 "num_base_bdevs": 2, 00:08:32.978 "num_base_bdevs_discovered": 2, 00:08:32.978 "num_base_bdevs_operational": 2, 00:08:32.978 "base_bdevs_list": [ 00:08:32.978 { 00:08:32.978 "name": "BaseBdev1", 00:08:32.978 "uuid": "9b19f802-d61f-5e40-ab3d-4db7fa3927cb", 00:08:32.978 "is_configured": true, 00:08:32.978 "data_offset": 2048, 00:08:32.978 "data_size": 63488 
00:08:32.978 }, 00:08:32.978 { 00:08:32.978 "name": "BaseBdev2", 00:08:32.978 "uuid": "1a5f441b-9947-57e9-bd35-9619fed5a89a", 00:08:32.978 "is_configured": true, 00:08:32.978 "data_offset": 2048, 00:08:32.978 "data_size": 63488 00:08:32.978 } 00:08:32.978 ] 00:08:32.978 }' 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.978 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.236 [2024-11-04 11:40:58.743544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.236 [2024-11-04 11:40:58.743641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.236 [2024-11-04 11:40:58.746694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.236 [2024-11-04 11:40:58.746779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.236 [2024-11-04 11:40:58.746829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.236 [2024-11-04 11:40:58.746875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.236 { 00:08:33.236 "results": [ 00:08:33.236 { 00:08:33.236 "job": "raid_bdev1", 00:08:33.236 "core_mask": "0x1", 00:08:33.236 "workload": "randrw", 00:08:33.236 "percentage": 50, 00:08:33.236 "status": "finished", 00:08:33.236 "queue_depth": 1, 00:08:33.236 "io_size": 131072, 00:08:33.236 "runtime": 
1.369351, 00:08:33.236 "iops": 14411.936749598897, 00:08:33.236 "mibps": 1801.4920936998622, 00:08:33.236 "io_failed": 1, 00:08:33.236 "io_timeout": 0, 00:08:33.236 "avg_latency_us": 96.18217483887754, 00:08:33.236 "min_latency_us": 27.612227074235808, 00:08:33.236 "max_latency_us": 1681.3275109170306 00:08:33.236 } 00:08:33.236 ], 00:08:33.236 "core_count": 1 00:08:33.236 } 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62622 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62622 ']' 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62622 00:08:33.236 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:33.494 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:33.494 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62622 00:08:33.494 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:33.494 killing process with pid 62622 00:08:33.494 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:33.494 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62622' 00:08:33.494 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62622 00:08:33.494 [2024-11-04 11:40:58.797486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.495 11:40:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62622 00:08:33.495 [2024-11-04 11:40:58.949720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uxibmi221B 00:08:34.867 11:41:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:34.867 00:08:34.867 real 0m4.515s 00:08:34.867 user 0m5.463s 00:08:34.867 sys 0m0.572s 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.867 11:41:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.867 ************************************ 00:08:34.868 END TEST raid_read_error_test 00:08:34.868 ************************************ 00:08:34.868 11:41:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:34.868 11:41:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:34.868 11:41:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.868 11:41:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.868 ************************************ 00:08:34.868 START TEST raid_write_error_test 00:08:34.868 ************************************ 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:34.868 11:41:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:34.868 11:41:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vMwJbVv788 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62762 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62762 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62762 ']' 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.868 11:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.868 [2024-11-04 11:41:00.339565] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:08:34.868 [2024-11-04 11:41:00.340257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62762 ] 00:08:35.126 [2024-11-04 11:41:00.515049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.126 [2024-11-04 11:41:00.624570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.385 [2024-11-04 11:41:00.823093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.385 [2024-11-04 11:41:00.823148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.645 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:35.645 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:35.645 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.645 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:35.645 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.645 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 BaseBdev1_malloc 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 true 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 [2024-11-04 11:41:01.213500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:35.904 [2024-11-04 11:41:01.213600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.904 [2024-11-04 11:41:01.213627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:35.904 [2024-11-04 11:41:01.213638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.904 [2024-11-04 11:41:01.215773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.904 [2024-11-04 11:41:01.215815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:35.904 BaseBdev1 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 BaseBdev2_malloc 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:35.904 11:41:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 true 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 [2024-11-04 11:41:01.280253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:35.904 [2024-11-04 11:41:01.280311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.904 [2024-11-04 11:41:01.280328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:35.904 [2024-11-04 11:41:01.280339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.904 [2024-11-04 11:41:01.282603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.904 [2024-11-04 11:41:01.282642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:35.904 BaseBdev2 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 [2024-11-04 11:41:01.292329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:35.904 [2024-11-04 11:41:01.294348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.904 [2024-11-04 11:41:01.294645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.904 [2024-11-04 11:41:01.294668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:35.904 [2024-11-04 11:41:01.294929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:35.904 [2024-11-04 11:41:01.295121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.904 [2024-11-04 11:41:01.295135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:35.904 [2024-11-04 11:41:01.295306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.904 11:41:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.904 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.904 "name": "raid_bdev1", 00:08:35.904 "uuid": "e6c61e6b-d857-44c0-a26e-01a7402d28ad", 00:08:35.904 "strip_size_kb": 64, 00:08:35.904 "state": "online", 00:08:35.904 "raid_level": "concat", 00:08:35.904 "superblock": true, 00:08:35.904 "num_base_bdevs": 2, 00:08:35.905 "num_base_bdevs_discovered": 2, 00:08:35.905 "num_base_bdevs_operational": 2, 00:08:35.905 "base_bdevs_list": [ 00:08:35.905 { 00:08:35.905 "name": "BaseBdev1", 00:08:35.905 "uuid": "e3e36f26-18c7-54e2-9464-4fb8e28d21c2", 00:08:35.905 "is_configured": true, 00:08:35.905 "data_offset": 2048, 00:08:35.905 "data_size": 63488 00:08:35.905 }, 00:08:35.905 { 00:08:35.905 "name": "BaseBdev2", 00:08:35.905 "uuid": "1f53ceb3-8a05-598e-b998-b9bd9ba99850", 00:08:35.905 "is_configured": true, 00:08:35.905 "data_offset": 2048, 00:08:35.905 "data_size": 63488 00:08:35.905 } 00:08:35.905 ] 00:08:35.905 }' 00:08:35.905 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.905 11:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.473 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:36.473 11:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.473 [2024-11-04 11:41:01.824751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.416 "name": "raid_bdev1", 00:08:37.416 "uuid": "e6c61e6b-d857-44c0-a26e-01a7402d28ad", 00:08:37.416 "strip_size_kb": 64, 00:08:37.416 "state": "online", 00:08:37.416 "raid_level": "concat", 00:08:37.416 "superblock": true, 00:08:37.416 "num_base_bdevs": 2, 00:08:37.416 "num_base_bdevs_discovered": 2, 00:08:37.416 "num_base_bdevs_operational": 2, 00:08:37.416 "base_bdevs_list": [ 00:08:37.416 { 00:08:37.416 "name": "BaseBdev1", 00:08:37.416 "uuid": "e3e36f26-18c7-54e2-9464-4fb8e28d21c2", 00:08:37.416 "is_configured": true, 00:08:37.416 "data_offset": 2048, 00:08:37.416 "data_size": 63488 00:08:37.416 }, 00:08:37.416 { 00:08:37.416 "name": "BaseBdev2", 00:08:37.416 "uuid": "1f53ceb3-8a05-598e-b998-b9bd9ba99850", 00:08:37.416 "is_configured": true, 00:08:37.416 "data_offset": 2048, 00:08:37.416 "data_size": 63488 00:08:37.416 } 00:08:37.416 ] 00:08:37.416 }' 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.416 11:41:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.676 11:41:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.676 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.676 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.936 [2024-11-04 11:41:03.201162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.936 [2024-11-04 11:41:03.201284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.936 [2024-11-04 11:41:03.204417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.936 [2024-11-04 11:41:03.204464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.936 [2024-11-04 11:41:03.204500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.936 [2024-11-04 11:41:03.204517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:37.936 { 00:08:37.936 "results": [ 00:08:37.936 { 00:08:37.936 "job": "raid_bdev1", 00:08:37.936 "core_mask": "0x1", 00:08:37.936 "workload": "randrw", 00:08:37.936 "percentage": 50, 00:08:37.936 "status": "finished", 00:08:37.936 "queue_depth": 1, 00:08:37.936 "io_size": 131072, 00:08:37.936 "runtime": 1.377376, 00:08:37.936 "iops": 14889.906605022885, 00:08:37.936 "mibps": 1861.2383256278606, 00:08:37.936 "io_failed": 1, 00:08:37.936 "io_timeout": 0, 00:08:37.936 "avg_latency_us": 93.21103408923967, 00:08:37.936 "min_latency_us": 27.053275109170304, 00:08:37.936 "max_latency_us": 1688.482096069869 00:08:37.936 } 00:08:37.936 ], 00:08:37.936 "core_count": 1 00:08:37.936 } 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62762 00:08:37.936 11:41:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62762 ']' 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62762 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62762 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62762' 00:08:37.936 killing process with pid 62762 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62762 00:08:37.936 [2024-11-04 11:41:03.251532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.936 11:41:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62762 00:08:37.936 [2024-11-04 11:41:03.397082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vMwJbVv788 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.330 11:41:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:39.330 00:08:39.330 real 0m4.417s 00:08:39.330 user 0m5.284s 00:08:39.330 sys 0m0.515s 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.330 ************************************ 00:08:39.330 END TEST raid_write_error_test 00:08:39.330 ************************************ 00:08:39.330 11:41:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.330 11:41:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:39.330 11:41:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:39.330 11:41:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:39.330 11:41:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.330 11:41:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.330 ************************************ 00:08:39.330 START TEST raid_state_function_test 00:08:39.330 ************************************ 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.330 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62900 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:39.331 Process raid pid: 62900 00:08:39.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62900' 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62900 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62900 ']' 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.331 11:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.331 [2024-11-04 11:41:04.813342] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:08:39.331 [2024-11-04 11:41:04.813590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.589 [2024-11-04 11:41:04.992727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.848 [2024-11-04 11:41:05.118351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.848 [2024-11-04 11:41:05.338960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.848 [2024-11-04 11:41:05.339064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.417 [2024-11-04 11:41:05.691119] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.417 [2024-11-04 11:41:05.691244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.417 [2024-11-04 11:41:05.691295] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.417 [2024-11-04 11:41:05.691340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.417 11:41:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.417 "name": "Existed_Raid", 00:08:40.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.417 "strip_size_kb": 0, 00:08:40.417 "state": "configuring", 00:08:40.417 
"raid_level": "raid1", 00:08:40.417 "superblock": false, 00:08:40.417 "num_base_bdevs": 2, 00:08:40.417 "num_base_bdevs_discovered": 0, 00:08:40.417 "num_base_bdevs_operational": 2, 00:08:40.417 "base_bdevs_list": [ 00:08:40.417 { 00:08:40.417 "name": "BaseBdev1", 00:08:40.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.417 "is_configured": false, 00:08:40.417 "data_offset": 0, 00:08:40.417 "data_size": 0 00:08:40.417 }, 00:08:40.417 { 00:08:40.417 "name": "BaseBdev2", 00:08:40.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.417 "is_configured": false, 00:08:40.417 "data_offset": 0, 00:08:40.417 "data_size": 0 00:08:40.417 } 00:08:40.417 ] 00:08:40.417 }' 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.417 11:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.677 [2024-11-04 11:41:06.154311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.677 [2024-11-04 11:41:06.154353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:40.677 [2024-11-04 11:41:06.166326] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.677 [2024-11-04 11:41:06.166476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.677 [2024-11-04 11:41:06.166548] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.677 [2024-11-04 11:41:06.166593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.677 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 [2024-11-04 11:41:06.214319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.936 BaseBdev1 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 [ 00:08:40.936 { 00:08:40.936 "name": "BaseBdev1", 00:08:40.936 "aliases": [ 00:08:40.936 "63c5c665-9ab1-40e9-bd87-2885bc7f5c0b" 00:08:40.936 ], 00:08:40.936 "product_name": "Malloc disk", 00:08:40.936 "block_size": 512, 00:08:40.936 "num_blocks": 65536, 00:08:40.936 "uuid": "63c5c665-9ab1-40e9-bd87-2885bc7f5c0b", 00:08:40.936 "assigned_rate_limits": { 00:08:40.936 "rw_ios_per_sec": 0, 00:08:40.936 "rw_mbytes_per_sec": 0, 00:08:40.936 "r_mbytes_per_sec": 0, 00:08:40.936 "w_mbytes_per_sec": 0 00:08:40.936 }, 00:08:40.936 "claimed": true, 00:08:40.936 "claim_type": "exclusive_write", 00:08:40.936 "zoned": false, 00:08:40.936 "supported_io_types": { 00:08:40.936 "read": true, 00:08:40.936 "write": true, 00:08:40.936 "unmap": true, 00:08:40.936 "flush": true, 00:08:40.936 "reset": true, 00:08:40.936 "nvme_admin": false, 00:08:40.936 "nvme_io": false, 00:08:40.936 "nvme_io_md": false, 00:08:40.936 "write_zeroes": true, 00:08:40.936 "zcopy": true, 00:08:40.936 "get_zone_info": false, 00:08:40.936 "zone_management": false, 00:08:40.936 "zone_append": false, 00:08:40.936 "compare": false, 00:08:40.936 "compare_and_write": false, 00:08:40.936 "abort": true, 00:08:40.936 "seek_hole": false, 00:08:40.936 "seek_data": false, 00:08:40.936 "copy": true, 00:08:40.936 "nvme_iov_md": 
false 00:08:40.936 }, 00:08:40.936 "memory_domains": [ 00:08:40.936 { 00:08:40.936 "dma_device_id": "system", 00:08:40.936 "dma_device_type": 1 00:08:40.936 }, 00:08:40.936 { 00:08:40.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.936 "dma_device_type": 2 00:08:40.936 } 00:08:40.936 ], 00:08:40.936 "driver_specific": {} 00:08:40.936 } 00:08:40.936 ] 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.936 11:41:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.936 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.937 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.937 "name": "Existed_Raid", 00:08:40.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.937 "strip_size_kb": 0, 00:08:40.937 "state": "configuring", 00:08:40.937 "raid_level": "raid1", 00:08:40.937 "superblock": false, 00:08:40.937 "num_base_bdevs": 2, 00:08:40.937 "num_base_bdevs_discovered": 1, 00:08:40.937 "num_base_bdevs_operational": 2, 00:08:40.937 "base_bdevs_list": [ 00:08:40.937 { 00:08:40.937 "name": "BaseBdev1", 00:08:40.937 "uuid": "63c5c665-9ab1-40e9-bd87-2885bc7f5c0b", 00:08:40.937 "is_configured": true, 00:08:40.937 "data_offset": 0, 00:08:40.937 "data_size": 65536 00:08:40.937 }, 00:08:40.937 { 00:08:40.937 "name": "BaseBdev2", 00:08:40.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.937 "is_configured": false, 00:08:40.937 "data_offset": 0, 00:08:40.937 "data_size": 0 00:08:40.937 } 00:08:40.937 ] 00:08:40.937 }' 00:08:40.937 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.937 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 [2024-11-04 11:41:06.681586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.196 [2024-11-04 11:41:06.681648] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.196 [2024-11-04 11:41:06.693603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.196 [2024-11-04 11:41:06.695690] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.196 [2024-11-04 11:41:06.695784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.196 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.455 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.455 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.455 "name": "Existed_Raid", 00:08:41.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.455 "strip_size_kb": 0, 00:08:41.455 "state": "configuring", 00:08:41.455 "raid_level": "raid1", 00:08:41.455 "superblock": false, 00:08:41.455 "num_base_bdevs": 2, 00:08:41.455 "num_base_bdevs_discovered": 1, 00:08:41.455 "num_base_bdevs_operational": 2, 00:08:41.455 "base_bdevs_list": [ 00:08:41.455 { 00:08:41.455 "name": "BaseBdev1", 00:08:41.455 "uuid": "63c5c665-9ab1-40e9-bd87-2885bc7f5c0b", 00:08:41.455 "is_configured": true, 00:08:41.455 "data_offset": 0, 00:08:41.455 "data_size": 65536 00:08:41.455 }, 00:08:41.455 { 00:08:41.455 "name": "BaseBdev2", 00:08:41.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.455 "is_configured": false, 00:08:41.455 "data_offset": 0, 00:08:41.455 "data_size": 0 00:08:41.455 } 00:08:41.455 
] 00:08:41.455 }' 00:08:41.455 11:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.455 11:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.714 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.714 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.714 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.714 [2024-11-04 11:41:07.179044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.715 [2024-11-04 11:41:07.179204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:41.715 [2024-11-04 11:41:07.179230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:41.715 [2024-11-04 11:41:07.179634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.715 [2024-11-04 11:41:07.179881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:41.715 [2024-11-04 11:41:07.179939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:41.715 [2024-11-04 11:41:07.180323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.715 BaseBdev2 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:41.715 11:41:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.715 [ 00:08:41.715 { 00:08:41.715 "name": "BaseBdev2", 00:08:41.715 "aliases": [ 00:08:41.715 "9b2612a5-45e8-451e-afd9-22561db21400" 00:08:41.715 ], 00:08:41.715 "product_name": "Malloc disk", 00:08:41.715 "block_size": 512, 00:08:41.715 "num_blocks": 65536, 00:08:41.715 "uuid": "9b2612a5-45e8-451e-afd9-22561db21400", 00:08:41.715 "assigned_rate_limits": { 00:08:41.715 "rw_ios_per_sec": 0, 00:08:41.715 "rw_mbytes_per_sec": 0, 00:08:41.715 "r_mbytes_per_sec": 0, 00:08:41.715 "w_mbytes_per_sec": 0 00:08:41.715 }, 00:08:41.715 "claimed": true, 00:08:41.715 "claim_type": "exclusive_write", 00:08:41.715 "zoned": false, 00:08:41.715 "supported_io_types": { 00:08:41.715 "read": true, 00:08:41.715 "write": true, 00:08:41.715 "unmap": true, 00:08:41.715 "flush": true, 00:08:41.715 "reset": true, 00:08:41.715 "nvme_admin": false, 00:08:41.715 "nvme_io": false, 00:08:41.715 "nvme_io_md": 
false, 00:08:41.715 "write_zeroes": true, 00:08:41.715 "zcopy": true, 00:08:41.715 "get_zone_info": false, 00:08:41.715 "zone_management": false, 00:08:41.715 "zone_append": false, 00:08:41.715 "compare": false, 00:08:41.715 "compare_and_write": false, 00:08:41.715 "abort": true, 00:08:41.715 "seek_hole": false, 00:08:41.715 "seek_data": false, 00:08:41.715 "copy": true, 00:08:41.715 "nvme_iov_md": false 00:08:41.715 }, 00:08:41.715 "memory_domains": [ 00:08:41.715 { 00:08:41.715 "dma_device_id": "system", 00:08:41.715 "dma_device_type": 1 00:08:41.715 }, 00:08:41.715 { 00:08:41.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.715 "dma_device_type": 2 00:08:41.715 } 00:08:41.715 ], 00:08:41.715 "driver_specific": {} 00:08:41.715 } 00:08:41.715 ] 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.715 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.974 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.974 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.974 "name": "Existed_Raid", 00:08:41.974 "uuid": "5957f586-6e90-4b64-98f7-221d14678200", 00:08:41.974 "strip_size_kb": 0, 00:08:41.974 "state": "online", 00:08:41.974 "raid_level": "raid1", 00:08:41.974 "superblock": false, 00:08:41.974 "num_base_bdevs": 2, 00:08:41.974 "num_base_bdevs_discovered": 2, 00:08:41.974 "num_base_bdevs_operational": 2, 00:08:41.974 "base_bdevs_list": [ 00:08:41.974 { 00:08:41.974 "name": "BaseBdev1", 00:08:41.974 "uuid": "63c5c665-9ab1-40e9-bd87-2885bc7f5c0b", 00:08:41.974 "is_configured": true, 00:08:41.974 "data_offset": 0, 00:08:41.974 "data_size": 65536 00:08:41.974 }, 00:08:41.974 { 00:08:41.974 "name": "BaseBdev2", 00:08:41.974 "uuid": "9b2612a5-45e8-451e-afd9-22561db21400", 00:08:41.974 "is_configured": true, 00:08:41.974 "data_offset": 0, 00:08:41.974 "data_size": 65536 00:08:41.974 } 00:08:41.974 ] 00:08:41.974 }' 00:08:41.974 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:41.974 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.234 [2024-11-04 11:41:07.722533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.234 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.493 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.493 "name": "Existed_Raid", 00:08:42.493 "aliases": [ 00:08:42.493 "5957f586-6e90-4b64-98f7-221d14678200" 00:08:42.493 ], 00:08:42.493 "product_name": "Raid Volume", 00:08:42.493 "block_size": 512, 00:08:42.493 "num_blocks": 65536, 00:08:42.493 "uuid": "5957f586-6e90-4b64-98f7-221d14678200", 00:08:42.493 "assigned_rate_limits": { 00:08:42.493 "rw_ios_per_sec": 0, 00:08:42.493 "rw_mbytes_per_sec": 0, 00:08:42.493 "r_mbytes_per_sec": 
0, 00:08:42.493 "w_mbytes_per_sec": 0 00:08:42.493 }, 00:08:42.493 "claimed": false, 00:08:42.493 "zoned": false, 00:08:42.493 "supported_io_types": { 00:08:42.493 "read": true, 00:08:42.493 "write": true, 00:08:42.493 "unmap": false, 00:08:42.493 "flush": false, 00:08:42.493 "reset": true, 00:08:42.493 "nvme_admin": false, 00:08:42.493 "nvme_io": false, 00:08:42.493 "nvme_io_md": false, 00:08:42.493 "write_zeroes": true, 00:08:42.493 "zcopy": false, 00:08:42.493 "get_zone_info": false, 00:08:42.493 "zone_management": false, 00:08:42.493 "zone_append": false, 00:08:42.493 "compare": false, 00:08:42.493 "compare_and_write": false, 00:08:42.493 "abort": false, 00:08:42.493 "seek_hole": false, 00:08:42.493 "seek_data": false, 00:08:42.493 "copy": false, 00:08:42.493 "nvme_iov_md": false 00:08:42.493 }, 00:08:42.493 "memory_domains": [ 00:08:42.493 { 00:08:42.493 "dma_device_id": "system", 00:08:42.493 "dma_device_type": 1 00:08:42.493 }, 00:08:42.493 { 00:08:42.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.493 "dma_device_type": 2 00:08:42.493 }, 00:08:42.493 { 00:08:42.493 "dma_device_id": "system", 00:08:42.493 "dma_device_type": 1 00:08:42.493 }, 00:08:42.493 { 00:08:42.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.493 "dma_device_type": 2 00:08:42.493 } 00:08:42.493 ], 00:08:42.493 "driver_specific": { 00:08:42.493 "raid": { 00:08:42.493 "uuid": "5957f586-6e90-4b64-98f7-221d14678200", 00:08:42.493 "strip_size_kb": 0, 00:08:42.493 "state": "online", 00:08:42.493 "raid_level": "raid1", 00:08:42.493 "superblock": false, 00:08:42.493 "num_base_bdevs": 2, 00:08:42.493 "num_base_bdevs_discovered": 2, 00:08:42.493 "num_base_bdevs_operational": 2, 00:08:42.493 "base_bdevs_list": [ 00:08:42.493 { 00:08:42.493 "name": "BaseBdev1", 00:08:42.493 "uuid": "63c5c665-9ab1-40e9-bd87-2885bc7f5c0b", 00:08:42.493 "is_configured": true, 00:08:42.493 "data_offset": 0, 00:08:42.493 "data_size": 65536 00:08:42.493 }, 00:08:42.493 { 00:08:42.493 "name": "BaseBdev2", 
00:08:42.493 "uuid": "9b2612a5-45e8-451e-afd9-22561db21400", 00:08:42.493 "is_configured": true, 00:08:42.493 "data_offset": 0, 00:08:42.493 "data_size": 65536 00:08:42.493 } 00:08:42.493 ] 00:08:42.493 } 00:08:42.493 } 00:08:42.493 }' 00:08:42.493 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.493 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:42.493 BaseBdev2' 00:08:42.493 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.493 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.493 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.493 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.494 11:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.494 [2024-11-04 11:41:07.941880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.752 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.752 "name": "Existed_Raid", 00:08:42.752 "uuid": "5957f586-6e90-4b64-98f7-221d14678200", 00:08:42.752 "strip_size_kb": 0, 00:08:42.752 "state": "online", 00:08:42.752 "raid_level": "raid1", 00:08:42.752 "superblock": false, 00:08:42.752 "num_base_bdevs": 2, 00:08:42.752 "num_base_bdevs_discovered": 1, 00:08:42.752 "num_base_bdevs_operational": 1, 00:08:42.752 "base_bdevs_list": [ 00:08:42.752 
{ 00:08:42.752 "name": null, 00:08:42.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.753 "is_configured": false, 00:08:42.753 "data_offset": 0, 00:08:42.753 "data_size": 65536 00:08:42.753 }, 00:08:42.753 { 00:08:42.753 "name": "BaseBdev2", 00:08:42.753 "uuid": "9b2612a5-45e8-451e-afd9-22561db21400", 00:08:42.753 "is_configured": true, 00:08:42.753 "data_offset": 0, 00:08:42.753 "data_size": 65536 00:08:42.753 } 00:08:42.753 ] 00:08:42.753 }' 00:08:42.753 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.753 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.011 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:43.011 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.011 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.011 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:43.011 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.011 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.011 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:43.270 [2024-11-04 11:41:08.552309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.270 [2024-11-04 11:41:08.552444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.270 [2024-11-04 11:41:08.666772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.270 [2024-11-04 11:41:08.666838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.270 [2024-11-04 11:41:08.666852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62900 00:08:43.270 11:41:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62900 ']' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62900 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62900 00:08:43.270 killing process with pid 62900 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62900' 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62900 00:08:43.270 [2024-11-04 11:41:08.747596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.270 11:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62900 00:08:43.270 [2024-11-04 11:41:08.768295] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.649 11:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:44.649 00:08:44.649 real 0m5.238s 00:08:44.649 user 0m7.547s 00:08:44.649 sys 0m0.839s 00:08:44.649 11:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.649 ************************************ 00:08:44.649 END TEST raid_state_function_test 00:08:44.649 ************************************ 00:08:44.649 11:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.649 11:41:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:44.649 11:41:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:44.649 11:41:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.649 11:41:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.649 ************************************ 00:08:44.649 START TEST raid_state_function_test_sb 00:08:44.649 ************************************ 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63159 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63159' 00:08:44.649 Process raid pid: 63159 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63159 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63159 ']' 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.649 11:41:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.649 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.649 [2024-11-04 11:41:10.113483] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:44.649 [2024-11-04 11:41:10.113702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.908 [2024-11-04 11:41:10.287515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.908 [2024-11-04 11:41:10.409321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.168 [2024-11-04 11:41:10.617452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.168 [2024-11-04 11:41:10.617570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.737 [2024-11-04 11:41:10.969856] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.737 [2024-11-04 11:41:10.969975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.737 [2024-11-04 11:41:10.970007] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.737 [2024-11-04 11:41:10.970018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.737 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.738 11:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.738 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.738 "name": "Existed_Raid", 00:08:45.738 "uuid": "f4a0af7b-a199-4ea1-8b38-6af01ced8750", 00:08:45.738 "strip_size_kb": 0, 00:08:45.738 "state": "configuring", 00:08:45.738 "raid_level": "raid1", 00:08:45.738 "superblock": true, 00:08:45.738 "num_base_bdevs": 2, 00:08:45.738 "num_base_bdevs_discovered": 0, 00:08:45.738 "num_base_bdevs_operational": 2, 00:08:45.738 "base_bdevs_list": [ 00:08:45.738 { 00:08:45.738 "name": "BaseBdev1", 00:08:45.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.738 "is_configured": false, 00:08:45.738 "data_offset": 0, 00:08:45.738 "data_size": 0 00:08:45.738 }, 00:08:45.738 { 00:08:45.738 "name": "BaseBdev2", 00:08:45.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.738 "is_configured": false, 00:08:45.738 "data_offset": 0, 00:08:45.738 "data_size": 0 00:08:45.738 } 00:08:45.738 ] 00:08:45.738 }' 00:08:45.738 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.738 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.998 [2024-11-04 11:41:11.441028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:45.998 [2024-11-04 11:41:11.441131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.998 [2024-11-04 11:41:11.453038] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.998 [2024-11-04 11:41:11.453156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.998 [2024-11-04 11:41:11.453189] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.998 [2024-11-04 11:41:11.453218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.998 [2024-11-04 11:41:11.503155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.998 BaseBdev1 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.998 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.257 [ 00:08:46.257 { 00:08:46.257 "name": "BaseBdev1", 00:08:46.257 "aliases": [ 00:08:46.257 "85e99dfb-eeaa-4be9-85a4-c4dbca6b6d33" 00:08:46.257 ], 00:08:46.257 "product_name": "Malloc disk", 00:08:46.257 "block_size": 512, 00:08:46.257 "num_blocks": 65536, 00:08:46.257 "uuid": "85e99dfb-eeaa-4be9-85a4-c4dbca6b6d33", 00:08:46.257 "assigned_rate_limits": { 00:08:46.257 "rw_ios_per_sec": 0, 00:08:46.257 "rw_mbytes_per_sec": 0, 00:08:46.257 "r_mbytes_per_sec": 0, 00:08:46.257 "w_mbytes_per_sec": 0 00:08:46.257 }, 00:08:46.257 "claimed": true, 
00:08:46.257 "claim_type": "exclusive_write", 00:08:46.257 "zoned": false, 00:08:46.257 "supported_io_types": { 00:08:46.257 "read": true, 00:08:46.257 "write": true, 00:08:46.257 "unmap": true, 00:08:46.257 "flush": true, 00:08:46.257 "reset": true, 00:08:46.257 "nvme_admin": false, 00:08:46.257 "nvme_io": false, 00:08:46.257 "nvme_io_md": false, 00:08:46.257 "write_zeroes": true, 00:08:46.257 "zcopy": true, 00:08:46.257 "get_zone_info": false, 00:08:46.257 "zone_management": false, 00:08:46.257 "zone_append": false, 00:08:46.257 "compare": false, 00:08:46.257 "compare_and_write": false, 00:08:46.257 "abort": true, 00:08:46.257 "seek_hole": false, 00:08:46.257 "seek_data": false, 00:08:46.257 "copy": true, 00:08:46.257 "nvme_iov_md": false 00:08:46.257 }, 00:08:46.257 "memory_domains": [ 00:08:46.257 { 00:08:46.257 "dma_device_id": "system", 00:08:46.257 "dma_device_type": 1 00:08:46.257 }, 00:08:46.257 { 00:08:46.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.257 "dma_device_type": 2 00:08:46.257 } 00:08:46.257 ], 00:08:46.257 "driver_specific": {} 00:08:46.257 } 00:08:46.257 ] 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.257 "name": "Existed_Raid", 00:08:46.257 "uuid": "5ebe028f-9bf6-468e-9f7b-2c3e172d59d9", 00:08:46.257 "strip_size_kb": 0, 00:08:46.257 "state": "configuring", 00:08:46.257 "raid_level": "raid1", 00:08:46.257 "superblock": true, 00:08:46.257 "num_base_bdevs": 2, 00:08:46.257 "num_base_bdevs_discovered": 1, 00:08:46.257 "num_base_bdevs_operational": 2, 00:08:46.257 "base_bdevs_list": [ 00:08:46.257 { 00:08:46.257 "name": "BaseBdev1", 00:08:46.257 "uuid": "85e99dfb-eeaa-4be9-85a4-c4dbca6b6d33", 00:08:46.257 "is_configured": true, 00:08:46.257 "data_offset": 2048, 00:08:46.257 "data_size": 63488 00:08:46.257 }, 00:08:46.257 { 00:08:46.257 "name": "BaseBdev2", 00:08:46.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.257 "is_configured": false, 00:08:46.257 
"data_offset": 0, 00:08:46.257 "data_size": 0 00:08:46.257 } 00:08:46.257 ] 00:08:46.257 }' 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.257 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.516 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.516 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.516 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.516 [2024-11-04 11:41:11.974454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:46.516 [2024-11-04 11:41:11.974515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.517 [2024-11-04 11:41:11.986521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.517 [2024-11-04 11:41:11.988730] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.517 [2024-11-04 11:41:11.988827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.517 11:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.517 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.517 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.517 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.776 11:41:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.776 "name": "Existed_Raid", 00:08:46.776 "uuid": "46c30b69-3c81-4853-8c4d-7f3a4b6fd23a", 00:08:46.776 "strip_size_kb": 0, 00:08:46.776 "state": "configuring", 00:08:46.776 "raid_level": "raid1", 00:08:46.776 "superblock": true, 00:08:46.776 "num_base_bdevs": 2, 00:08:46.776 "num_base_bdevs_discovered": 1, 00:08:46.776 "num_base_bdevs_operational": 2, 00:08:46.776 "base_bdevs_list": [ 00:08:46.776 { 00:08:46.776 "name": "BaseBdev1", 00:08:46.776 "uuid": "85e99dfb-eeaa-4be9-85a4-c4dbca6b6d33", 00:08:46.776 "is_configured": true, 00:08:46.776 "data_offset": 2048, 00:08:46.776 "data_size": 63488 00:08:46.776 }, 00:08:46.776 { 00:08:46.776 "name": "BaseBdev2", 00:08:46.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.776 "is_configured": false, 00:08:46.776 "data_offset": 0, 00:08:46.776 "data_size": 0 00:08:46.776 } 00:08:46.776 ] 00:08:46.776 }' 00:08:46.776 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.776 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.036 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.036 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.036 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.036 [2024-11-04 11:41:12.441427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.036 [2024-11-04 11:41:12.441810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.036 [2024-11-04 11:41:12.441831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.036 [2024-11-04 11:41:12.442126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:47.036 
[2024-11-04 11:41:12.442288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.037 [2024-11-04 11:41:12.442302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.037 BaseBdev2 00:08:47.037 [2024-11-04 11:41:12.442520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.037 [ 00:08:47.037 { 00:08:47.037 "name": "BaseBdev2", 00:08:47.037 "aliases": [ 00:08:47.037 "686d2487-69a8-423d-9974-2ed0a7e94d70" 00:08:47.037 ], 00:08:47.037 "product_name": "Malloc disk", 00:08:47.037 "block_size": 512, 00:08:47.037 "num_blocks": 65536, 00:08:47.037 "uuid": "686d2487-69a8-423d-9974-2ed0a7e94d70", 00:08:47.037 "assigned_rate_limits": { 00:08:47.037 "rw_ios_per_sec": 0, 00:08:47.037 "rw_mbytes_per_sec": 0, 00:08:47.037 "r_mbytes_per_sec": 0, 00:08:47.037 "w_mbytes_per_sec": 0 00:08:47.037 }, 00:08:47.037 "claimed": true, 00:08:47.037 "claim_type": "exclusive_write", 00:08:47.037 "zoned": false, 00:08:47.037 "supported_io_types": { 00:08:47.037 "read": true, 00:08:47.037 "write": true, 00:08:47.037 "unmap": true, 00:08:47.037 "flush": true, 00:08:47.037 "reset": true, 00:08:47.037 "nvme_admin": false, 00:08:47.037 "nvme_io": false, 00:08:47.037 "nvme_io_md": false, 00:08:47.037 "write_zeroes": true, 00:08:47.037 "zcopy": true, 00:08:47.037 "get_zone_info": false, 00:08:47.037 "zone_management": false, 00:08:47.037 "zone_append": false, 00:08:47.037 "compare": false, 00:08:47.037 "compare_and_write": false, 00:08:47.037 "abort": true, 00:08:47.037 "seek_hole": false, 00:08:47.037 "seek_data": false, 00:08:47.037 "copy": true, 00:08:47.037 "nvme_iov_md": false 00:08:47.037 }, 00:08:47.037 "memory_domains": [ 00:08:47.037 { 00:08:47.037 "dma_device_id": "system", 00:08:47.037 "dma_device_type": 1 00:08:47.037 }, 00:08:47.037 { 00:08:47.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.037 "dma_device_type": 2 00:08:47.037 } 00:08:47.037 ], 00:08:47.037 "driver_specific": {} 00:08:47.037 } 00:08:47.037 ] 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:47.037 "name": "Existed_Raid", 00:08:47.037 "uuid": "46c30b69-3c81-4853-8c4d-7f3a4b6fd23a", 00:08:47.037 "strip_size_kb": 0, 00:08:47.037 "state": "online", 00:08:47.037 "raid_level": "raid1", 00:08:47.037 "superblock": true, 00:08:47.037 "num_base_bdevs": 2, 00:08:47.037 "num_base_bdevs_discovered": 2, 00:08:47.037 "num_base_bdevs_operational": 2, 00:08:47.037 "base_bdevs_list": [ 00:08:47.037 { 00:08:47.037 "name": "BaseBdev1", 00:08:47.037 "uuid": "85e99dfb-eeaa-4be9-85a4-c4dbca6b6d33", 00:08:47.037 "is_configured": true, 00:08:47.037 "data_offset": 2048, 00:08:47.037 "data_size": 63488 00:08:47.037 }, 00:08:47.037 { 00:08:47.037 "name": "BaseBdev2", 00:08:47.037 "uuid": "686d2487-69a8-423d-9974-2ed0a7e94d70", 00:08:47.037 "is_configured": true, 00:08:47.037 "data_offset": 2048, 00:08:47.037 "data_size": 63488 00:08:47.037 } 00:08:47.037 ] 00:08:47.037 }' 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.037 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.607 11:41:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.607 [2024-11-04 11:41:12.968888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.607 11:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.607 "name": "Existed_Raid", 00:08:47.607 "aliases": [ 00:08:47.607 "46c30b69-3c81-4853-8c4d-7f3a4b6fd23a" 00:08:47.607 ], 00:08:47.607 "product_name": "Raid Volume", 00:08:47.607 "block_size": 512, 00:08:47.607 "num_blocks": 63488, 00:08:47.607 "uuid": "46c30b69-3c81-4853-8c4d-7f3a4b6fd23a", 00:08:47.607 "assigned_rate_limits": { 00:08:47.607 "rw_ios_per_sec": 0, 00:08:47.607 "rw_mbytes_per_sec": 0, 00:08:47.607 "r_mbytes_per_sec": 0, 00:08:47.607 "w_mbytes_per_sec": 0 00:08:47.607 }, 00:08:47.607 "claimed": false, 00:08:47.607 "zoned": false, 00:08:47.607 "supported_io_types": { 00:08:47.607 "read": true, 00:08:47.607 "write": true, 00:08:47.607 "unmap": false, 00:08:47.607 "flush": false, 00:08:47.607 "reset": true, 00:08:47.607 "nvme_admin": false, 00:08:47.607 "nvme_io": false, 00:08:47.607 "nvme_io_md": false, 00:08:47.607 "write_zeroes": true, 00:08:47.607 "zcopy": false, 00:08:47.607 "get_zone_info": false, 00:08:47.607 "zone_management": false, 00:08:47.607 "zone_append": false, 00:08:47.607 "compare": false, 00:08:47.607 "compare_and_write": false, 00:08:47.607 "abort": false, 00:08:47.607 "seek_hole": false, 00:08:47.607 "seek_data": false, 00:08:47.607 "copy": false, 00:08:47.607 "nvme_iov_md": false 00:08:47.607 }, 00:08:47.607 "memory_domains": [ 00:08:47.607 { 00:08:47.607 "dma_device_id": "system", 00:08:47.607 
"dma_device_type": 1 00:08:47.607 }, 00:08:47.607 { 00:08:47.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.607 "dma_device_type": 2 00:08:47.607 }, 00:08:47.607 { 00:08:47.607 "dma_device_id": "system", 00:08:47.607 "dma_device_type": 1 00:08:47.607 }, 00:08:47.607 { 00:08:47.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.607 "dma_device_type": 2 00:08:47.607 } 00:08:47.607 ], 00:08:47.607 "driver_specific": { 00:08:47.607 "raid": { 00:08:47.607 "uuid": "46c30b69-3c81-4853-8c4d-7f3a4b6fd23a", 00:08:47.607 "strip_size_kb": 0, 00:08:47.607 "state": "online", 00:08:47.607 "raid_level": "raid1", 00:08:47.607 "superblock": true, 00:08:47.607 "num_base_bdevs": 2, 00:08:47.607 "num_base_bdevs_discovered": 2, 00:08:47.607 "num_base_bdevs_operational": 2, 00:08:47.607 "base_bdevs_list": [ 00:08:47.607 { 00:08:47.607 "name": "BaseBdev1", 00:08:47.607 "uuid": "85e99dfb-eeaa-4be9-85a4-c4dbca6b6d33", 00:08:47.607 "is_configured": true, 00:08:47.607 "data_offset": 2048, 00:08:47.607 "data_size": 63488 00:08:47.607 }, 00:08:47.607 { 00:08:47.607 "name": "BaseBdev2", 00:08:47.607 "uuid": "686d2487-69a8-423d-9974-2ed0a7e94d70", 00:08:47.607 "is_configured": true, 00:08:47.607 "data_offset": 2048, 00:08:47.607 "data_size": 63488 00:08:47.607 } 00:08:47.607 ] 00:08:47.607 } 00:08:47.607 } 00:08:47.607 }' 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.607 BaseBdev2' 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.607 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.866 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.866 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.866 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.866 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.866 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.866 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.867 11:41:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.867 [2024-11-04 11:41:13.212278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.867 "name": "Existed_Raid", 00:08:47.867 "uuid": "46c30b69-3c81-4853-8c4d-7f3a4b6fd23a", 00:08:47.867 "strip_size_kb": 0, 00:08:47.867 "state": "online", 00:08:47.867 "raid_level": "raid1", 00:08:47.867 "superblock": true, 00:08:47.867 "num_base_bdevs": 2, 00:08:47.867 "num_base_bdevs_discovered": 1, 00:08:47.867 "num_base_bdevs_operational": 1, 00:08:47.867 "base_bdevs_list": [ 00:08:47.867 { 00:08:47.867 "name": null, 00:08:47.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.867 "is_configured": false, 00:08:47.867 "data_offset": 0, 00:08:47.867 "data_size": 63488 00:08:47.867 }, 00:08:47.867 { 00:08:47.867 "name": "BaseBdev2", 00:08:47.867 "uuid": "686d2487-69a8-423d-9974-2ed0a7e94d70", 00:08:47.867 "is_configured": true, 00:08:47.867 "data_offset": 2048, 00:08:47.867 "data_size": 63488 00:08:47.867 } 00:08:47.867 ] 00:08:47.867 }' 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.867 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.436 [2024-11-04 11:41:13.831070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.436 [2024-11-04 11:41:13.831246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.436 [2024-11-04 11:41:13.937720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.436 [2024-11-04 11:41:13.937856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.436 [2024-11-04 11:41:13.937900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.436 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63159 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63159 ']' 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63159 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.696 11:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63159 00:08:48.696 11:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.696 11:41:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.696 11:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63159' 00:08:48.696 killing process with pid 63159 00:08:48.696 11:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63159 00:08:48.696 [2024-11-04 11:41:14.035271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.696 11:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63159 00:08:48.696 [2024-11-04 11:41:14.052537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.072 11:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:50.072 00:08:50.072 real 0m5.229s 00:08:50.072 user 0m7.513s 00:08:50.072 sys 0m0.809s 00:08:50.072 11:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.072 11:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.072 ************************************ 00:08:50.072 END TEST raid_state_function_test_sb 00:08:50.072 ************************************ 00:08:50.073 11:41:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:50.073 11:41:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:50.073 11:41:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.073 11:41:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.073 ************************************ 00:08:50.073 START TEST raid_superblock_test 00:08:50.073 ************************************ 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63411 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63411 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63411 ']' 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.073 11:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.073 [2024-11-04 11:41:15.414058] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:50.073 [2024-11-04 11:41:15.414807] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63411 ] 00:08:50.332 [2024-11-04 11:41:15.612703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.332 [2024-11-04 11:41:15.736030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.591 [2024-11-04 11:41:15.943358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.591 [2024-11-04 11:41:15.943531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.850 11:41:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.850 malloc1 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.850 [2024-11-04 11:41:16.358687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.850 [2024-11-04 11:41:16.358852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.850 [2024-11-04 11:41:16.358906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:50.850 [2024-11-04 11:41:16.358962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.850 
[2024-11-04 11:41:16.361558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.850 [2024-11-04 11:41:16.361647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.850 pt1 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.850 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.125 malloc2 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.125 11:41:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.125 [2024-11-04 11:41:16.424175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.125 [2024-11-04 11:41:16.424343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.125 [2024-11-04 11:41:16.424419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:51.125 [2024-11-04 11:41:16.424463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.125 [2024-11-04 11:41:16.427005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.125 [2024-11-04 11:41:16.427089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.125 pt2 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.125 [2024-11-04 11:41:16.436244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.125 [2024-11-04 11:41:16.438360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.125 [2024-11-04 11:41:16.438716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:51.125 [2024-11-04 11:41:16.438785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.125 [2024-11-04 
11:41:16.439169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:51.125 [2024-11-04 11:41:16.439419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:51.125 [2024-11-04 11:41:16.439476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:51.125 [2024-11-04 11:41:16.439750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.125 11:41:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.125 "name": "raid_bdev1", 00:08:51.125 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:51.125 "strip_size_kb": 0, 00:08:51.125 "state": "online", 00:08:51.125 "raid_level": "raid1", 00:08:51.125 "superblock": true, 00:08:51.125 "num_base_bdevs": 2, 00:08:51.125 "num_base_bdevs_discovered": 2, 00:08:51.125 "num_base_bdevs_operational": 2, 00:08:51.125 "base_bdevs_list": [ 00:08:51.125 { 00:08:51.125 "name": "pt1", 00:08:51.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.125 "is_configured": true, 00:08:51.125 "data_offset": 2048, 00:08:51.125 "data_size": 63488 00:08:51.125 }, 00:08:51.125 { 00:08:51.125 "name": "pt2", 00:08:51.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.125 "is_configured": true, 00:08:51.125 "data_offset": 2048, 00:08:51.125 "data_size": 63488 00:08:51.125 } 00:08:51.125 ] 00:08:51.125 }' 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.125 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.398 
11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.398 [2024-11-04 11:41:16.883861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.398 11:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.657 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.657 "name": "raid_bdev1", 00:08:51.657 "aliases": [ 00:08:51.657 "3a3b67c4-6b72-4205-853d-dd870e543b60" 00:08:51.657 ], 00:08:51.657 "product_name": "Raid Volume", 00:08:51.657 "block_size": 512, 00:08:51.657 "num_blocks": 63488, 00:08:51.657 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:51.657 "assigned_rate_limits": { 00:08:51.657 "rw_ios_per_sec": 0, 00:08:51.657 "rw_mbytes_per_sec": 0, 00:08:51.657 "r_mbytes_per_sec": 0, 00:08:51.657 "w_mbytes_per_sec": 0 00:08:51.657 }, 00:08:51.657 "claimed": false, 00:08:51.657 "zoned": false, 00:08:51.657 "supported_io_types": { 00:08:51.657 "read": true, 00:08:51.657 "write": true, 00:08:51.657 "unmap": false, 00:08:51.657 "flush": false, 00:08:51.657 "reset": true, 00:08:51.657 "nvme_admin": false, 00:08:51.657 "nvme_io": false, 00:08:51.657 "nvme_io_md": false, 00:08:51.657 "write_zeroes": true, 00:08:51.657 "zcopy": false, 00:08:51.657 "get_zone_info": false, 00:08:51.657 "zone_management": false, 00:08:51.657 "zone_append": false, 00:08:51.657 "compare": false, 00:08:51.657 "compare_and_write": false, 00:08:51.657 "abort": false, 00:08:51.657 "seek_hole": false, 
00:08:51.657 "seek_data": false, 00:08:51.657 "copy": false, 00:08:51.657 "nvme_iov_md": false 00:08:51.657 }, 00:08:51.657 "memory_domains": [ 00:08:51.657 { 00:08:51.657 "dma_device_id": "system", 00:08:51.657 "dma_device_type": 1 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.657 "dma_device_type": 2 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "system", 00:08:51.657 "dma_device_type": 1 00:08:51.657 }, 00:08:51.657 { 00:08:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.657 "dma_device_type": 2 00:08:51.657 } 00:08:51.657 ], 00:08:51.657 "driver_specific": { 00:08:51.657 "raid": { 00:08:51.657 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:51.657 "strip_size_kb": 0, 00:08:51.657 "state": "online", 00:08:51.657 "raid_level": "raid1", 00:08:51.657 "superblock": true, 00:08:51.657 "num_base_bdevs": 2, 00:08:51.657 "num_base_bdevs_discovered": 2, 00:08:51.657 "num_base_bdevs_operational": 2, 00:08:51.657 "base_bdevs_list": [ 00:08:51.657 { 00:08:51.657 "name": "pt1", 00:08:51.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.658 "is_configured": true, 00:08:51.658 "data_offset": 2048, 00:08:51.658 "data_size": 63488 00:08:51.658 }, 00:08:51.658 { 00:08:51.658 "name": "pt2", 00:08:51.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.658 "is_configured": true, 00:08:51.658 "data_offset": 2048, 00:08:51.658 "data_size": 63488 00:08:51.658 } 00:08:51.658 ] 00:08:51.658 } 00:08:51.658 } 00:08:51.658 }' 00:08:51.658 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.658 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:51.658 pt2' 00:08:51.658 11:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.658 11:41:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.658 [2024-11-04 11:41:17.135379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a3b67c4-6b72-4205-853d-dd870e543b60 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a3b67c4-6b72-4205-853d-dd870e543b60 ']' 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.658 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.918 [2024-11-04 11:41:17.182936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.918 [2024-11-04 11:41:17.183031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.918 [2024-11-04 11:41:17.183168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.918 [2024-11-04 11:41:17.183286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.918 [2024-11-04 11:41:17.183351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:51.918 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.918 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:51.918 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.918 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.918 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:51.918 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.918 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.919 [2024-11-04 11:41:17.322760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:51.919 [2024-11-04 11:41:17.324999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:51.919 [2024-11-04 11:41:17.325158] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:51.919 [2024-11-04 11:41:17.325355] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:51.919 [2024-11-04 11:41:17.325431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.919 [2024-11-04 11:41:17.325483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:51.919 request: 00:08:51.919 { 00:08:51.919 "name": "raid_bdev1", 00:08:51.919 "raid_level": "raid1", 00:08:51.919 "base_bdevs": [ 00:08:51.919 "malloc1", 00:08:51.919 "malloc2" 00:08:51.919 ], 00:08:51.919 "superblock": false, 00:08:51.919 "method": "bdev_raid_create", 00:08:51.919 "req_id": 1 00:08:51.919 } 00:08:51.919 Got JSON-RPC error response 00:08:51.919 response: 00:08:51.919 { 00:08:51.919 "code": -17, 00:08:51.919 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:51.919 } 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.919 [2024-11-04 11:41:17.386602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.919 [2024-11-04 11:41:17.386680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.919 [2024-11-04 11:41:17.386700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:51.919 [2024-11-04 11:41:17.386711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.919 [2024-11-04 11:41:17.389108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.919 [2024-11-04 11:41:17.389159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.919 [2024-11-04 11:41:17.389266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:51.919 [2024-11-04 11:41:17.389338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.919 pt1 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.919 11:41:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.919 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.179 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.179 "name": "raid_bdev1", 00:08:52.179 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:52.179 "strip_size_kb": 0, 00:08:52.179 "state": "configuring", 00:08:52.179 "raid_level": "raid1", 00:08:52.179 "superblock": true, 00:08:52.179 "num_base_bdevs": 2, 00:08:52.179 "num_base_bdevs_discovered": 1, 00:08:52.179 "num_base_bdevs_operational": 2, 00:08:52.179 "base_bdevs_list": [ 00:08:52.179 { 00:08:52.179 "name": "pt1", 00:08:52.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.179 
"is_configured": true, 00:08:52.179 "data_offset": 2048, 00:08:52.179 "data_size": 63488 00:08:52.179 }, 00:08:52.179 { 00:08:52.179 "name": null, 00:08:52.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.179 "is_configured": false, 00:08:52.179 "data_offset": 2048, 00:08:52.179 "data_size": 63488 00:08:52.179 } 00:08:52.179 ] 00:08:52.179 }' 00:08:52.179 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.180 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.439 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:52.439 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:52.439 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.439 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.439 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.439 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.439 [2024-11-04 11:41:17.901742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.439 [2024-11-04 11:41:17.901914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.439 [2024-11-04 11:41:17.901965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:52.439 [2024-11-04 11:41:17.902029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.439 [2024-11-04 11:41:17.902626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.439 [2024-11-04 11:41:17.902729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.440 [2024-11-04 11:41:17.902883] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.440 [2024-11-04 11:41:17.902960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.440 [2024-11-04 11:41:17.903150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:52.440 [2024-11-04 11:41:17.903200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.440 [2024-11-04 11:41:17.903543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:52.440 [2024-11-04 11:41:17.903794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:52.440 [2024-11-04 11:41:17.903841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:52.440 [2024-11-04 11:41:17.904106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.440 pt2 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.440 
11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.440 "name": "raid_bdev1", 00:08:52.440 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:52.440 "strip_size_kb": 0, 00:08:52.440 "state": "online", 00:08:52.440 "raid_level": "raid1", 00:08:52.440 "superblock": true, 00:08:52.440 "num_base_bdevs": 2, 00:08:52.440 "num_base_bdevs_discovered": 2, 00:08:52.440 "num_base_bdevs_operational": 2, 00:08:52.440 "base_bdevs_list": [ 00:08:52.440 { 00:08:52.440 "name": "pt1", 00:08:52.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.440 "is_configured": true, 00:08:52.440 "data_offset": 2048, 00:08:52.440 "data_size": 63488 00:08:52.440 }, 00:08:52.440 { 00:08:52.440 "name": "pt2", 00:08:52.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.440 "is_configured": true, 00:08:52.440 "data_offset": 2048, 00:08:52.440 "data_size": 63488 00:08:52.440 } 00:08:52.440 ] 00:08:52.440 }' 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:52.440 11:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.009 [2024-11-04 11:41:18.405192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.009 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.009 "name": "raid_bdev1", 00:08:53.009 "aliases": [ 00:08:53.009 "3a3b67c4-6b72-4205-853d-dd870e543b60" 00:08:53.009 ], 00:08:53.009 "product_name": "Raid Volume", 00:08:53.009 "block_size": 512, 00:08:53.009 "num_blocks": 63488, 00:08:53.009 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:53.009 "assigned_rate_limits": { 00:08:53.009 "rw_ios_per_sec": 0, 00:08:53.009 "rw_mbytes_per_sec": 0, 00:08:53.009 "r_mbytes_per_sec": 0, 00:08:53.009 "w_mbytes_per_sec": 0 
00:08:53.009 }, 00:08:53.009 "claimed": false, 00:08:53.009 "zoned": false, 00:08:53.009 "supported_io_types": { 00:08:53.009 "read": true, 00:08:53.009 "write": true, 00:08:53.009 "unmap": false, 00:08:53.009 "flush": false, 00:08:53.009 "reset": true, 00:08:53.010 "nvme_admin": false, 00:08:53.010 "nvme_io": false, 00:08:53.010 "nvme_io_md": false, 00:08:53.010 "write_zeroes": true, 00:08:53.010 "zcopy": false, 00:08:53.010 "get_zone_info": false, 00:08:53.010 "zone_management": false, 00:08:53.010 "zone_append": false, 00:08:53.010 "compare": false, 00:08:53.010 "compare_and_write": false, 00:08:53.010 "abort": false, 00:08:53.010 "seek_hole": false, 00:08:53.010 "seek_data": false, 00:08:53.010 "copy": false, 00:08:53.010 "nvme_iov_md": false 00:08:53.010 }, 00:08:53.010 "memory_domains": [ 00:08:53.010 { 00:08:53.010 "dma_device_id": "system", 00:08:53.010 "dma_device_type": 1 00:08:53.010 }, 00:08:53.010 { 00:08:53.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.010 "dma_device_type": 2 00:08:53.010 }, 00:08:53.010 { 00:08:53.010 "dma_device_id": "system", 00:08:53.010 "dma_device_type": 1 00:08:53.010 }, 00:08:53.010 { 00:08:53.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.010 "dma_device_type": 2 00:08:53.010 } 00:08:53.010 ], 00:08:53.010 "driver_specific": { 00:08:53.010 "raid": { 00:08:53.010 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:53.010 "strip_size_kb": 0, 00:08:53.010 "state": "online", 00:08:53.010 "raid_level": "raid1", 00:08:53.010 "superblock": true, 00:08:53.010 "num_base_bdevs": 2, 00:08:53.010 "num_base_bdevs_discovered": 2, 00:08:53.010 "num_base_bdevs_operational": 2, 00:08:53.010 "base_bdevs_list": [ 00:08:53.010 { 00:08:53.010 "name": "pt1", 00:08:53.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.010 "is_configured": true, 00:08:53.010 "data_offset": 2048, 00:08:53.010 "data_size": 63488 00:08:53.010 }, 00:08:53.010 { 00:08:53.010 "name": "pt2", 00:08:53.010 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:53.010 "is_configured": true, 00:08:53.010 "data_offset": 2048, 00:08:53.010 "data_size": 63488 00:08:53.010 } 00:08:53.010 ] 00:08:53.010 } 00:08:53.010 } 00:08:53.010 }' 00:08:53.010 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.010 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.010 pt2' 00:08:53.010 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.270 [2024-11-04 11:41:18.644809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a3b67c4-6b72-4205-853d-dd870e543b60 '!=' 3a3b67c4-6b72-4205-853d-dd870e543b60 ']' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.270 [2024-11-04 11:41:18.672532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:53.270 "name": "raid_bdev1", 00:08:53.270 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:53.270 "strip_size_kb": 0, 00:08:53.270 "state": "online", 00:08:53.270 "raid_level": "raid1", 00:08:53.270 "superblock": true, 00:08:53.270 "num_base_bdevs": 2, 00:08:53.270 "num_base_bdevs_discovered": 1, 00:08:53.270 "num_base_bdevs_operational": 1, 00:08:53.270 "base_bdevs_list": [ 00:08:53.270 { 00:08:53.270 "name": null, 00:08:53.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.270 "is_configured": false, 00:08:53.270 "data_offset": 0, 00:08:53.270 "data_size": 63488 00:08:53.270 }, 00:08:53.270 { 00:08:53.270 "name": "pt2", 00:08:53.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.270 "is_configured": true, 00:08:53.270 "data_offset": 2048, 00:08:53.270 "data_size": 63488 00:08:53.270 } 00:08:53.270 ] 00:08:53.270 }' 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.270 11:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.835 [2024-11-04 11:41:19.143679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.835 [2024-11-04 11:41:19.143782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.835 [2024-11-04 11:41:19.143914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.835 [2024-11-04 11:41:19.144031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.835 [2024-11-04 11:41:19.144113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:53.835 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 [2024-11-04 11:41:19.199565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.836 [2024-11-04 11:41:19.199637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.836 [2024-11-04 11:41:19.199657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:53.836 [2024-11-04 11:41:19.199668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.836 [2024-11-04 11:41:19.201996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.836 [2024-11-04 11:41:19.202079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.836 [2024-11-04 11:41:19.202172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.836 [2024-11-04 11:41:19.202228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.836 [2024-11-04 11:41:19.202337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.836 [2024-11-04 11:41:19.202351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.836 [2024-11-04 11:41:19.202603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:53.836 [2024-11-04 11:41:19.202759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.836 [2024-11-04 11:41:19.202769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:53.836 [2024-11-04 11:41:19.202916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.836 pt2 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:53.836 "name": "raid_bdev1", 00:08:53.836 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:53.836 "strip_size_kb": 0, 00:08:53.836 "state": "online", 00:08:53.836 "raid_level": "raid1", 00:08:53.836 "superblock": true, 00:08:53.836 "num_base_bdevs": 2, 00:08:53.836 "num_base_bdevs_discovered": 1, 00:08:53.836 "num_base_bdevs_operational": 1, 00:08:53.836 "base_bdevs_list": [ 00:08:53.836 { 00:08:53.836 "name": null, 00:08:53.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.836 "is_configured": false, 00:08:53.836 "data_offset": 2048, 00:08:53.836 "data_size": 63488 00:08:53.836 }, 00:08:53.836 { 00:08:53.836 "name": "pt2", 00:08:53.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.836 "is_configured": true, 00:08:53.836 "data_offset": 2048, 00:08:53.836 "data_size": 63488 00:08:53.836 } 00:08:53.836 ] 00:08:53.836 }' 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.836 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.400 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.400 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.401 [2024-11-04 11:41:19.682741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.401 [2024-11-04 11:41:19.682775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.401 [2024-11-04 11:41:19.682855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.401 [2024-11-04 11:41:19.682907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.401 [2024-11-04 11:41:19.682916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.401 [2024-11-04 11:41:19.746699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.401 [2024-11-04 11:41:19.746834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.401 [2024-11-04 11:41:19.746915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:54.401 [2024-11-04 11:41:19.746972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.401 [2024-11-04 11:41:19.749623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.401 [2024-11-04 11:41:19.749718] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.401 [2024-11-04 11:41:19.749870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:54.401 [2024-11-04 11:41:19.749972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.401 [2024-11-04 11:41:19.750172] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:54.401 [2024-11-04 11:41:19.750233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.401 [2024-11-04 11:41:19.750326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:54.401 [2024-11-04 11:41:19.750470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.401 [2024-11-04 11:41:19.750616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:54.401 [2024-11-04 11:41:19.750658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.401 [2024-11-04 11:41:19.750982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:54.401 [2024-11-04 11:41:19.751203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:54.401 [2024-11-04 11:41:19.751256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:54.401 [2024-11-04 11:41:19.751540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.401 pt1 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.401 "name": "raid_bdev1", 00:08:54.401 "uuid": "3a3b67c4-6b72-4205-853d-dd870e543b60", 00:08:54.401 "strip_size_kb": 0, 00:08:54.401 "state": "online", 00:08:54.401 "raid_level": "raid1", 00:08:54.401 "superblock": true, 00:08:54.401 "num_base_bdevs": 2, 00:08:54.401 "num_base_bdevs_discovered": 1, 00:08:54.401 "num_base_bdevs_operational": 
1, 00:08:54.401 "base_bdevs_list": [ 00:08:54.401 { 00:08:54.401 "name": null, 00:08:54.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.401 "is_configured": false, 00:08:54.401 "data_offset": 2048, 00:08:54.401 "data_size": 63488 00:08:54.401 }, 00:08:54.401 { 00:08:54.401 "name": "pt2", 00:08:54.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.401 "is_configured": true, 00:08:54.401 "data_offset": 2048, 00:08:54.401 "data_size": 63488 00:08:54.401 } 00:08:54.401 ] 00:08:54.401 }' 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.401 11:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.968 [2024-11-04 11:41:20.242197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3a3b67c4-6b72-4205-853d-dd870e543b60 '!=' 3a3b67c4-6b72-4205-853d-dd870e543b60 ']' 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63411 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63411 ']' 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63411 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63411 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:54.968 killing process with pid 63411 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63411' 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63411 00:08:54.968 [2024-11-04 11:41:20.329570] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.968 [2024-11-04 11:41:20.329680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.968 [2024-11-04 11:41:20.329730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.968 [2024-11-04 11:41:20.329745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:54.968 11:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 
63411 00:08:55.228 [2024-11-04 11:41:20.542381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.620 11:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:56.620 00:08:56.620 real 0m6.387s 00:08:56.620 user 0m9.672s 00:08:56.620 sys 0m1.132s 00:08:56.620 11:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.620 11:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.620 ************************************ 00:08:56.620 END TEST raid_superblock_test 00:08:56.620 ************************************ 00:08:56.620 11:41:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:56.620 11:41:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:56.621 11:41:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.621 11:41:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.621 ************************************ 00:08:56.621 START TEST raid_read_error_test 00:08:56.621 ************************************ 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cbYEdVTt9n 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63741 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63741 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63741 ']' 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.621 11:41:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.621 [2024-11-04 11:41:21.871204] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:08:56.621 [2024-11-04 11:41:21.871427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63741 ] 00:08:56.621 [2024-11-04 11:41:22.052092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.880 [2024-11-04 11:41:22.171113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.880 [2024-11-04 11:41:22.380493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.880 [2024-11-04 11:41:22.380524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev 
in "${base_bdevs[@]}" 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.449 BaseBdev1_malloc 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.449 true 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.449 [2024-11-04 11:41:22.796389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:57.449 [2024-11-04 11:41:22.796526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.449 [2024-11-04 11:41:22.796587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:57.449 [2024-11-04 11:41:22.796624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.449 [2024-11-04 11:41:22.798933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.449 [2024-11-04 11:41:22.799014] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:57.449 BaseBdev1 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.449 BaseBdev2_malloc 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.449 true 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.449 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.449 [2024-11-04 11:41:22.866040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:57.449 [2024-11-04 11:41:22.866104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.449 [2024-11-04 11:41:22.866124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:57.449 [2024-11-04 11:41:22.866135] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.450 [2024-11-04 11:41:22.868503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.450 [2024-11-04 11:41:22.868618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:57.450 BaseBdev2 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.450 [2024-11-04 11:41:22.878076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.450 [2024-11-04 11:41:22.880052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.450 [2024-11-04 11:41:22.880301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.450 [2024-11-04 11:41:22.880318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:57.450 [2024-11-04 11:41:22.880593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:57.450 [2024-11-04 11:41:22.880807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.450 [2024-11-04 11:41:22.880819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:57.450 [2024-11-04 11:41:22.881006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.450 "name": "raid_bdev1", 00:08:57.450 "uuid": "fd9b1219-f4ec-4cb4-ace9-92016c9739d7", 00:08:57.450 "strip_size_kb": 0, 00:08:57.450 "state": "online", 00:08:57.450 "raid_level": "raid1", 00:08:57.450 "superblock": true, 00:08:57.450 "num_base_bdevs": 2, 00:08:57.450 
"num_base_bdevs_discovered": 2, 00:08:57.450 "num_base_bdevs_operational": 2, 00:08:57.450 "base_bdevs_list": [ 00:08:57.450 { 00:08:57.450 "name": "BaseBdev1", 00:08:57.450 "uuid": "29d1de2f-7893-5fb8-98c9-bb7aabbcb3aa", 00:08:57.450 "is_configured": true, 00:08:57.450 "data_offset": 2048, 00:08:57.450 "data_size": 63488 00:08:57.450 }, 00:08:57.450 { 00:08:57.450 "name": "BaseBdev2", 00:08:57.450 "uuid": "f2116b3d-35ac-5e4d-a4ca-9972e83dda2e", 00:08:57.450 "is_configured": true, 00:08:57.450 "data_offset": 2048, 00:08:57.450 "data_size": 63488 00:08:57.450 } 00:08:57.450 ] 00:08:57.450 }' 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.450 11:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.018 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:58.018 11:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:58.018 [2024-11-04 11:41:23.438420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:58.956 11:41:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.956 "name": "raid_bdev1", 00:08:58.956 "uuid": "fd9b1219-f4ec-4cb4-ace9-92016c9739d7", 00:08:58.956 "strip_size_kb": 0, 00:08:58.956 "state": "online", 
00:08:58.956 "raid_level": "raid1", 00:08:58.956 "superblock": true, 00:08:58.956 "num_base_bdevs": 2, 00:08:58.956 "num_base_bdevs_discovered": 2, 00:08:58.956 "num_base_bdevs_operational": 2, 00:08:58.956 "base_bdevs_list": [ 00:08:58.956 { 00:08:58.956 "name": "BaseBdev1", 00:08:58.956 "uuid": "29d1de2f-7893-5fb8-98c9-bb7aabbcb3aa", 00:08:58.956 "is_configured": true, 00:08:58.956 "data_offset": 2048, 00:08:58.956 "data_size": 63488 00:08:58.956 }, 00:08:58.956 { 00:08:58.956 "name": "BaseBdev2", 00:08:58.956 "uuid": "f2116b3d-35ac-5e4d-a4ca-9972e83dda2e", 00:08:58.956 "is_configured": true, 00:08:58.956 "data_offset": 2048, 00:08:58.956 "data_size": 63488 00:08:58.956 } 00:08:58.956 ] 00:08:58.956 }' 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.956 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.527 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.527 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.527 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.527 [2024-11-04 11:41:24.810257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.527 [2024-11-04 11:41:24.810295] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.527 [2024-11-04 11:41:24.813193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.527 [2024-11-04 11:41:24.813310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.527 [2024-11-04 11:41:24.813453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.527 [2024-11-04 11:41:24.813468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:59.527 { 00:08:59.527 "results": [ 00:08:59.527 { 00:08:59.527 "job": "raid_bdev1", 00:08:59.527 "core_mask": "0x1", 00:08:59.527 "workload": "randrw", 00:08:59.527 "percentage": 50, 00:08:59.527 "status": "finished", 00:08:59.527 "queue_depth": 1, 00:08:59.527 "io_size": 131072, 00:08:59.527 "runtime": 1.372668, 00:08:59.527 "iops": 16930.532364708728, 00:08:59.527 "mibps": 2116.316545588591, 00:08:59.527 "io_failed": 0, 00:08:59.527 "io_timeout": 0, 00:08:59.527 "avg_latency_us": 56.334536599298005, 00:08:59.527 "min_latency_us": 23.811353711790392, 00:08:59.527 "max_latency_us": 1466.6899563318777 00:08:59.527 } 00:08:59.527 ], 00:08:59.527 "core_count": 1 00:08:59.527 } 00:08:59.527 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.527 11:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63741 00:08:59.527 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63741 ']' 00:08:59.527 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63741 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63741 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63741' 00:08:59.528 killing process with pid 63741 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63741 00:08:59.528 [2024-11-04 
11:41:24.857967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.528 11:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63741 00:08:59.528 [2024-11-04 11:41:24.995046] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cbYEdVTt9n 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:01.004 11:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:01.004 00:09:01.004 real 0m4.462s 00:09:01.005 user 0m5.389s 00:09:01.005 sys 0m0.527s 00:09:01.005 11:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.005 ************************************ 00:09:01.005 END TEST raid_read_error_test 00:09:01.005 ************************************ 00:09:01.005 11:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.005 11:41:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:01.005 11:41:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:01.005 11:41:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.005 11:41:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.005 ************************************ 00:09:01.005 START TEST 
raid_write_error_test 00:09:01.005 ************************************ 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:01.005 11:41:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.drskd6EgJC 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63881 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63881 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63881 ']' 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:01.005 11:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.005 [2024-11-04 11:41:26.400885] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:09:01.005 [2024-11-04 11:41:26.401103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63881 ] 00:09:01.264 [2024-11-04 11:41:26.556529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.264 [2024-11-04 11:41:26.684588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.523 [2024-11-04 11:41:26.909772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.523 [2024-11-04 11:41:26.909931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.090 BaseBdev1_malloc 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.090 true 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.090 [2024-11-04 11:41:27.380593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.090 [2024-11-04 11:41:27.380658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.090 [2024-11-04 11:41:27.380680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.090 [2024-11-04 11:41:27.380692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.090 [2024-11-04 11:41:27.383096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.090 [2024-11-04 11:41:27.383142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.090 BaseBdev1 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.090 BaseBdev2_malloc 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.090 11:41:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.090 true 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.090 [2024-11-04 11:41:27.446756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.090 [2024-11-04 11:41:27.446814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.090 [2024-11-04 11:41:27.446832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.090 [2024-11-04 11:41:27.446843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.090 [2024-11-04 11:41:27.449106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.090 [2024-11-04 11:41:27.449217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.090 BaseBdev2 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.090 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.090 [2024-11-04 11:41:27.458811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:02.091 [2024-11-04 11:41:27.460965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.091 [2024-11-04 11:41:27.461200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.091 [2024-11-04 11:41:27.461218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:02.091 [2024-11-04 11:41:27.461532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:02.091 [2024-11-04 11:41:27.461753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.091 [2024-11-04 11:41:27.461771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:02.091 [2024-11-04 11:41:27.462004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:02.091 "name": "raid_bdev1",
00:09:02.091 "uuid": "da76528c-0748-4ec2-8d71-805e74950031",
00:09:02.091 "strip_size_kb": 0,
00:09:02.091 "state": "online",
00:09:02.091 "raid_level": "raid1",
00:09:02.091 "superblock": true,
00:09:02.091 "num_base_bdevs": 2,
00:09:02.091 "num_base_bdevs_discovered": 2,
00:09:02.091 "num_base_bdevs_operational": 2,
00:09:02.091 "base_bdevs_list": [
00:09:02.091 {
00:09:02.091 "name": "BaseBdev1",
00:09:02.091 "uuid": "482ce74e-31c4-52e9-adea-42cfcaec0409",
00:09:02.091 "is_configured": true,
00:09:02.091 "data_offset": 2048,
00:09:02.091 "data_size": 63488
00:09:02.091 },
00:09:02.091 {
00:09:02.091 "name": "BaseBdev2",
00:09:02.091 "uuid": "4ee217c4-879f-5853-bfed-c1df5b200aa8",
00:09:02.091 "is_configured": true,
00:09:02.091 "data_offset": 2048,
00:09:02.091 "data_size": 63488
00:09:02.091 }
00:09:02.091 ]
00:09:02.091 }'
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:02.091 11:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.659 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:02.659 11:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:02.659 [2024-11-04 11:41:27.991431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.595 [2024-11-04 11:41:28.915707] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:09:03.595 [2024-11-04 11:41:28.915866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:03.595 [2024-11-04 11:41:28.916147] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:03.595 "name": "raid_bdev1",
00:09:03.595 "uuid": "da76528c-0748-4ec2-8d71-805e74950031",
00:09:03.595 "strip_size_kb": 0,
00:09:03.595 "state": "online",
00:09:03.595 "raid_level": "raid1",
00:09:03.595 "superblock": true,
00:09:03.595 "num_base_bdevs": 2,
00:09:03.595 "num_base_bdevs_discovered": 1,
00:09:03.595 "num_base_bdevs_operational": 1,
00:09:03.595 "base_bdevs_list": [
00:09:03.595 {
00:09:03.595 "name": null,
00:09:03.595 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.595 "is_configured": false,
00:09:03.595 "data_offset": 0,
00:09:03.595 "data_size": 63488
00:09:03.595 },
00:09:03.595 {
00:09:03.595 "name": "BaseBdev2",
00:09:03.595 "uuid": "4ee217c4-879f-5853-bfed-c1df5b200aa8",
00:09:03.595 "is_configured": true,
00:09:03.595 "data_offset": 2048,
00:09:03.595 "data_size": 63488
00:09:03.595 }
00:09:03.595 ]
00:09:03.595 }'
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:03.595 11:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.854 11:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:03.854 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.854 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.113 [2024-11-04 11:41:29.381033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:04.113 [2024-11-04 11:41:29.381148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:04.113 [2024-11-04 11:41:29.383960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:04.113 [2024-11-04 11:41:29.383996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:04.113 [2024-11-04 11:41:29.384051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:04.113 [2024-11-04 11:41:29.384073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:04.113 {
00:09:04.113 "results": [
00:09:04.113 {
00:09:04.113 "job": "raid_bdev1",
00:09:04.113 "core_mask": "0x1",
00:09:04.113 "workload": "randrw",
00:09:04.113 "percentage": 50,
00:09:04.113 "status": "finished",
00:09:04.113 "queue_depth": 1,
00:09:04.113 "io_size": 131072,
00:09:04.113 "runtime": 1.390176,
00:09:04.113 "iops": 19285.327900927652,
00:09:04.113 "mibps": 2410.6659876159565,
00:09:04.113 "io_failed": 0,
00:09:04.113 "io_timeout": 0,
00:09:04.113 "avg_latency_us": 49.05038666078127,
00:09:04.113 "min_latency_us": 22.581659388646287,
00:09:04.113 "max_latency_us": 1702.7912663755458
00:09:04.113 }
00:09:04.113 ],
00:09:04.113 "core_count": 1
00:09:04.113 }
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63881
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63881 ']'
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63881
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63881
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63881'
killing process with pid 63881
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63881
00:09:04.113 [2024-11-04 11:41:29.434050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:04.113 11:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63881
00:09:04.113 [2024-11-04 11:41:29.574838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.drskd6EgJC
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:05.486
00:09:05.486 real 0m4.488s
00:09:05.486 user 0m5.440s
00:09:05.486 sys 0m0.519s
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:05.486 11:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.486 ************************************
00:09:05.486 END TEST raid_write_error_test
00:09:05.486 ************************************
00:09:05.486 11:41:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:09:05.486 11:41:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:05.486 11:41:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:09:05.486 11:41:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:09:05.486 11:41:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:05.486 11:41:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:05.486 ************************************
00:09:05.486 START TEST raid_state_function_test
00:09:05.486 ************************************
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
Process raid pid: 64025
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64025
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64025'
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64025
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 64025 ']'
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:05.486 11:41:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.486 [2024-11-04 11:41:30.950735] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization...
00:09:05.486 [2024-11-04 11:41:30.950942] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:05.744 [2024-11-04 11:41:31.123401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:05.744 [2024-11-04 11:41:31.245054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.003 [2024-11-04 11:41:31.463255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:06.003 [2024-11-04 11:41:31.463305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.582 [2024-11-04 11:41:31.837760] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:06.582 [2024-11-04 11:41:31.837826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:06.582 [2024-11-04 11:41:31.837839] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:06.582 [2024-11-04 11:41:31.837851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:06.582 [2024-11-04 11:41:31.837859] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:06.582 [2024-11-04 11:41:31.837869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.582 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.582 "name": "Existed_Raid",
00:09:06.582 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.582 "strip_size_kb": 64,
00:09:06.582 "state": "configuring",
00:09:06.582 "raid_level": "raid0",
00:09:06.582 "superblock": false,
00:09:06.583 "num_base_bdevs": 3,
00:09:06.583 "num_base_bdevs_discovered": 0,
00:09:06.583 "num_base_bdevs_operational": 3,
00:09:06.583 "base_bdevs_list": [
00:09:06.583 {
00:09:06.583 "name": "BaseBdev1",
00:09:06.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.583 "is_configured": false,
00:09:06.583 "data_offset": 0,
00:09:06.583 "data_size": 0
00:09:06.583 },
00:09:06.583 {
00:09:06.583 "name": "BaseBdev2",
00:09:06.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.583 "is_configured": false,
00:09:06.583 "data_offset": 0,
00:09:06.583 "data_size": 0
00:09:06.583 },
00:09:06.583 {
00:09:06.583 "name": "BaseBdev3",
00:09:06.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.583 "is_configured": false,
00:09:06.583 "data_offset": 0,
00:09:06.583 "data_size": 0
00:09:06.583 }
00:09:06.583 ]
00:09:06.583 }'
00:09:06.583 11:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.583 11:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.841 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:06.841 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.842 [2024-11-04 11:41:32.288972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:06.842 [2024-11-04 11:41:32.289085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.842 [2024-11-04 11:41:32.300950] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:06.842 [2024-11-04 11:41:32.301065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:06.842 [2024-11-04 11:41:32.301105] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:06.842 [2024-11-04 11:41:32.301149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:06.842 [2024-11-04 11:41:32.301186] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:06.842 [2024-11-04 11:41:32.301248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.842 [2024-11-04 11:41:32.351126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:06.842 BaseBdev1
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.842 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.101 [
00:09:07.101 {
00:09:07.101 "name": "BaseBdev1",
00:09:07.101 "aliases": [
00:09:07.101 "982e8556-33f9-42a4-9d6e-27cee5177270"
00:09:07.101 ],
00:09:07.101 "product_name": "Malloc disk",
00:09:07.101 "block_size": 512,
00:09:07.101 "num_blocks": 65536,
00:09:07.101 "uuid": "982e8556-33f9-42a4-9d6e-27cee5177270",
00:09:07.101 "assigned_rate_limits": {
00:09:07.101 "rw_ios_per_sec": 0,
00:09:07.101 "rw_mbytes_per_sec": 0,
00:09:07.101 "r_mbytes_per_sec": 0,
00:09:07.101 "w_mbytes_per_sec": 0
00:09:07.101 },
00:09:07.101 "claimed": true,
00:09:07.101 "claim_type": "exclusive_write",
00:09:07.101 "zoned": false,
00:09:07.101 "supported_io_types": {
00:09:07.101 "read": true,
00:09:07.101 "write": true,
00:09:07.101 "unmap": true,
00:09:07.101 "flush": true,
00:09:07.101 "reset": true,
00:09:07.101 "nvme_admin": false,
00:09:07.101 "nvme_io": false,
00:09:07.101 "nvme_io_md": false,
00:09:07.101 "write_zeroes": true,
00:09:07.101 "zcopy": true,
00:09:07.101 "get_zone_info": false,
00:09:07.101 "zone_management": false,
00:09:07.101 "zone_append": false,
00:09:07.101 "compare": false,
00:09:07.101 "compare_and_write": false,
00:09:07.101 "abort": true,
00:09:07.101 "seek_hole": false,
00:09:07.101 "seek_data": false,
00:09:07.101 "copy": true,
00:09:07.101 "nvme_iov_md": false
00:09:07.101 },
00:09:07.101 "memory_domains": [
00:09:07.101 {
00:09:07.101 "dma_device_id": "system",
00:09:07.101 "dma_device_type": 1
00:09:07.101 },
00:09:07.101 {
00:09:07.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.101 "dma_device_type": 2
00:09:07.101 }
00:09:07.101 ],
00:09:07.101 "driver_specific": {}
00:09:07.101 }
00:09:07.101 ]
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.101 "name": "Existed_Raid",
00:09:07.101 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.101 "strip_size_kb": 64,
00:09:07.101 "state": "configuring",
00:09:07.101 "raid_level": "raid0",
00:09:07.101 "superblock": false,
00:09:07.101 "num_base_bdevs": 3,
00:09:07.101 "num_base_bdevs_discovered": 1,
00:09:07.101 "num_base_bdevs_operational": 3,
00:09:07.101 "base_bdevs_list": [
00:09:07.101 {
00:09:07.101 "name": "BaseBdev1",
00:09:07.101 "uuid": "982e8556-33f9-42a4-9d6e-27cee5177270",
00:09:07.101 "is_configured": true,
00:09:07.101 "data_offset": 0,
00:09:07.101 "data_size": 65536
00:09:07.101 },
00:09:07.101 {
00:09:07.101 "name": "BaseBdev2",
00:09:07.101 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.101 "is_configured": false,
00:09:07.101 "data_offset": 0,
00:09:07.101 "data_size": 0
00:09:07.101 },
00:09:07.101 {
00:09:07.101 "name": "BaseBdev3",
00:09:07.101 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.101 "is_configured": false,
00:09:07.101 "data_offset": 0,
00:09:07.101 "data_size": 0
00:09:07.101 }
00:09:07.101 ]
00:09:07.101 }'
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.101 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.361 [2024-11-04 11:41:32.798458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:07.361 [2024-11-04 11:41:32.798579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.361 [2024-11-04 11:41:32.806493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:07.361 [2024-11-04 11:41:32.808598] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:07.361 [2024-11-04 11:41:32.808694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:07.361 [2024-11-04 11:41:32.808711] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:07.361 [2024-11-04 11:41:32.808723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.361 "name": "Existed_Raid",
00:09:07.361 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.361 "strip_size_kb": 64,
00:09:07.361 "state": "configuring",
00:09:07.361 "raid_level": "raid0",
00:09:07.361 "superblock": false,
00:09:07.361 "num_base_bdevs": 3,
00:09:07.361 "num_base_bdevs_discovered": 1,
00:09:07.361 "num_base_bdevs_operational": 3,
00:09:07.361 "base_bdevs_list": [
00:09:07.361 {
00:09:07.361 "name": "BaseBdev1",
00:09:07.361 "uuid": "982e8556-33f9-42a4-9d6e-27cee5177270",
00:09:07.361 "is_configured": true,
00:09:07.361 "data_offset": 0,
00:09:07.361 "data_size": 65536
00:09:07.361 },
00:09:07.361 {
00:09:07.361 "name": "BaseBdev2",
00:09:07.361 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.361 "is_configured": false,
00:09:07.361 "data_offset": 0,
00:09:07.361 "data_size": 0
00:09:07.361 },
00:09:07.361 {
00:09:07.361 "name": "BaseBdev3",
00:09:07.361 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.361 "is_configured": false,
00:09:07.361 "data_offset": 0,
00:09:07.361 "data_size": 0
00:09:07.361 }
00:09:07.361 ]
00:09:07.361 }'
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.361 11:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.931 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:07.931 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.931 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.932 [2024-11-04 11:41:33.314050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:07.932 BaseBdev2
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.932 [
00:09:07.932 {
00:09:07.932 "name": "BaseBdev2",
00:09:07.932 "aliases": [
00:09:07.932 "1862259d-8b9b-4c9b-9696-acc9836d3984"
00:09:07.932 ],
00:09:07.932 "product_name": "Malloc disk",
00:09:07.932 "block_size": 512,
00:09:07.932 "num_blocks": 65536,
00:09:07.932 "uuid": "1862259d-8b9b-4c9b-9696-acc9836d3984",
00:09:07.932 "assigned_rate_limits": {
00:09:07.932 "rw_ios_per_sec": 0,
00:09:07.932 "rw_mbytes_per_sec": 0,
00:09:07.932 "r_mbytes_per_sec": 0,
00:09:07.932 "w_mbytes_per_sec": 0
00:09:07.932 },
00:09:07.932 "claimed": true,
00:09:07.932 "claim_type": "exclusive_write",
00:09:07.932 "zoned": false,
00:09:07.932 "supported_io_types": {
00:09:07.932 "read": true,
00:09:07.932 "write": true,
00:09:07.932 "unmap": true,
00:09:07.932 "flush": true,
00:09:07.932 "reset": true,
00:09:07.932 "nvme_admin": false,
00:09:07.932 "nvme_io": false,
00:09:07.932 "nvme_io_md": false,
00:09:07.932 "write_zeroes": true,
00:09:07.932 "zcopy": true,
00:09:07.932 "get_zone_info": false,
00:09:07.932 "zone_management": false,
00:09:07.932 "zone_append": false,
00:09:07.932 "compare": false,
00:09:07.932 "compare_and_write": false,
00:09:07.932 "abort": true,
00:09:07.932 "seek_hole": false,
00:09:07.932 "seek_data": false,
00:09:07.932 "copy": true,
00:09:07.932 "nvme_iov_md": false
00:09:07.932 },
00:09:07.932 "memory_domains": [
00:09:07.932 {
00:09:07.932 "dma_device_id": "system",
00:09:07.932 "dma_device_type": 1
00:09:07.932 },
00:09:07.932 {
00:09:07.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.932 "dma_device_type": 2
00:09:07.932 }
00:09:07.932 ],
00:09:07.932 "driver_specific": {}
00:09:07.932 }
00:09:07.932 ]
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.932 11:41:33 bdev_raid.raid_state_function_test --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.932 "name": "Existed_Raid", 00:09:07.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.932 "strip_size_kb": 64, 00:09:07.932 "state": "configuring", 00:09:07.932 "raid_level": "raid0", 00:09:07.932 "superblock": false, 00:09:07.932 "num_base_bdevs": 3, 00:09:07.932 "num_base_bdevs_discovered": 2, 00:09:07.932 "num_base_bdevs_operational": 3, 00:09:07.932 "base_bdevs_list": [ 00:09:07.932 { 00:09:07.932 "name": "BaseBdev1", 00:09:07.932 "uuid": "982e8556-33f9-42a4-9d6e-27cee5177270", 00:09:07.932 "is_configured": true, 00:09:07.932 "data_offset": 0, 00:09:07.932 "data_size": 65536 00:09:07.932 }, 00:09:07.932 { 00:09:07.932 "name": "BaseBdev2", 00:09:07.932 "uuid": "1862259d-8b9b-4c9b-9696-acc9836d3984", 00:09:07.932 "is_configured": true, 00:09:07.932 "data_offset": 0, 00:09:07.932 "data_size": 65536 00:09:07.932 }, 00:09:07.932 { 00:09:07.932 "name": "BaseBdev3", 00:09:07.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.932 "is_configured": false, 00:09:07.932 "data_offset": 0, 00:09:07.932 "data_size": 0 00:09:07.932 } 00:09:07.932 ] 00:09:07.932 }' 00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.932 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 [2024-11-04 11:41:33.861263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.502 [2024-11-04 11:41:33.861417] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:08.502 [2024-11-04 11:41:33.861439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:08.502 [2024-11-04 11:41:33.861818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:08.502 [2024-11-04 11:41:33.861986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:08.502 [2024-11-04 11:41:33.861996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:08.502 [2024-11-04 11:41:33.862264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.502 BaseBdev3 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.502 
11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.502 [ 00:09:08.502 { 00:09:08.502 "name": "BaseBdev3", 00:09:08.502 "aliases": [ 00:09:08.502 "9e48bf59-c79b-4ca4-81d9-924f0de1f89c" 00:09:08.502 ], 00:09:08.502 "product_name": "Malloc disk", 00:09:08.502 "block_size": 512, 00:09:08.502 "num_blocks": 65536, 00:09:08.502 "uuid": "9e48bf59-c79b-4ca4-81d9-924f0de1f89c", 00:09:08.502 "assigned_rate_limits": { 00:09:08.502 "rw_ios_per_sec": 0, 00:09:08.502 "rw_mbytes_per_sec": 0, 00:09:08.502 "r_mbytes_per_sec": 0, 00:09:08.502 "w_mbytes_per_sec": 0 00:09:08.502 }, 00:09:08.502 "claimed": true, 00:09:08.502 "claim_type": "exclusive_write", 00:09:08.502 "zoned": false, 00:09:08.502 "supported_io_types": { 00:09:08.502 "read": true, 00:09:08.502 "write": true, 00:09:08.502 "unmap": true, 00:09:08.502 "flush": true, 00:09:08.502 "reset": true, 00:09:08.502 "nvme_admin": false, 00:09:08.502 "nvme_io": false, 00:09:08.502 "nvme_io_md": false, 00:09:08.502 "write_zeroes": true, 00:09:08.502 "zcopy": true, 00:09:08.502 "get_zone_info": false, 00:09:08.502 "zone_management": false, 00:09:08.502 "zone_append": false, 00:09:08.502 "compare": false, 00:09:08.502 "compare_and_write": false, 00:09:08.502 "abort": true, 00:09:08.502 "seek_hole": false, 00:09:08.502 "seek_data": false, 00:09:08.502 "copy": true, 00:09:08.502 "nvme_iov_md": false 00:09:08.502 }, 00:09:08.502 "memory_domains": [ 00:09:08.502 { 00:09:08.502 "dma_device_id": "system", 00:09:08.502 "dma_device_type": 1 00:09:08.502 }, 00:09:08.502 { 00:09:08.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.502 "dma_device_type": 2 00:09:08.502 } 00:09:08.502 ], 00:09:08.502 "driver_specific": {} 00:09:08.502 } 00:09:08.502 ] 
00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.502 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.502 "name": "Existed_Raid", 00:09:08.502 "uuid": "4e1352db-a7e7-4990-8d6c-485a25024c01", 00:09:08.502 "strip_size_kb": 64, 00:09:08.502 "state": "online", 00:09:08.502 "raid_level": "raid0", 00:09:08.502 "superblock": false, 00:09:08.502 "num_base_bdevs": 3, 00:09:08.502 "num_base_bdevs_discovered": 3, 00:09:08.502 "num_base_bdevs_operational": 3, 00:09:08.502 "base_bdevs_list": [ 00:09:08.502 { 00:09:08.502 "name": "BaseBdev1", 00:09:08.502 "uuid": "982e8556-33f9-42a4-9d6e-27cee5177270", 00:09:08.502 "is_configured": true, 00:09:08.502 "data_offset": 0, 00:09:08.502 "data_size": 65536 00:09:08.502 }, 00:09:08.502 { 00:09:08.502 "name": "BaseBdev2", 00:09:08.502 "uuid": "1862259d-8b9b-4c9b-9696-acc9836d3984", 00:09:08.502 "is_configured": true, 00:09:08.502 "data_offset": 0, 00:09:08.502 "data_size": 65536 00:09:08.502 }, 00:09:08.502 { 00:09:08.502 "name": "BaseBdev3", 00:09:08.502 "uuid": "9e48bf59-c79b-4ca4-81d9-924f0de1f89c", 00:09:08.502 "is_configured": true, 00:09:08.503 "data_offset": 0, 00:09:08.503 "data_size": 65536 00:09:08.503 } 00:09:08.503 ] 00:09:08.503 }' 00:09:08.503 11:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.503 11:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.071 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.071 [2024-11-04 11:41:34.432755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.072 "name": "Existed_Raid", 00:09:09.072 "aliases": [ 00:09:09.072 "4e1352db-a7e7-4990-8d6c-485a25024c01" 00:09:09.072 ], 00:09:09.072 "product_name": "Raid Volume", 00:09:09.072 "block_size": 512, 00:09:09.072 "num_blocks": 196608, 00:09:09.072 "uuid": "4e1352db-a7e7-4990-8d6c-485a25024c01", 00:09:09.072 "assigned_rate_limits": { 00:09:09.072 "rw_ios_per_sec": 0, 00:09:09.072 "rw_mbytes_per_sec": 0, 00:09:09.072 "r_mbytes_per_sec": 0, 00:09:09.072 "w_mbytes_per_sec": 0 00:09:09.072 }, 00:09:09.072 "claimed": false, 00:09:09.072 "zoned": false, 00:09:09.072 "supported_io_types": { 00:09:09.072 "read": true, 00:09:09.072 "write": true, 00:09:09.072 "unmap": true, 00:09:09.072 "flush": true, 00:09:09.072 "reset": true, 00:09:09.072 "nvme_admin": false, 00:09:09.072 "nvme_io": false, 00:09:09.072 "nvme_io_md": false, 00:09:09.072 "write_zeroes": true, 00:09:09.072 "zcopy": false, 00:09:09.072 "get_zone_info": false, 00:09:09.072 "zone_management": false, 00:09:09.072 
"zone_append": false, 00:09:09.072 "compare": false, 00:09:09.072 "compare_and_write": false, 00:09:09.072 "abort": false, 00:09:09.072 "seek_hole": false, 00:09:09.072 "seek_data": false, 00:09:09.072 "copy": false, 00:09:09.072 "nvme_iov_md": false 00:09:09.072 }, 00:09:09.072 "memory_domains": [ 00:09:09.072 { 00:09:09.072 "dma_device_id": "system", 00:09:09.072 "dma_device_type": 1 00:09:09.072 }, 00:09:09.072 { 00:09:09.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.072 "dma_device_type": 2 00:09:09.072 }, 00:09:09.072 { 00:09:09.072 "dma_device_id": "system", 00:09:09.072 "dma_device_type": 1 00:09:09.072 }, 00:09:09.072 { 00:09:09.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.072 "dma_device_type": 2 00:09:09.072 }, 00:09:09.072 { 00:09:09.072 "dma_device_id": "system", 00:09:09.072 "dma_device_type": 1 00:09:09.072 }, 00:09:09.072 { 00:09:09.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.072 "dma_device_type": 2 00:09:09.072 } 00:09:09.072 ], 00:09:09.072 "driver_specific": { 00:09:09.072 "raid": { 00:09:09.072 "uuid": "4e1352db-a7e7-4990-8d6c-485a25024c01", 00:09:09.072 "strip_size_kb": 64, 00:09:09.072 "state": "online", 00:09:09.072 "raid_level": "raid0", 00:09:09.072 "superblock": false, 00:09:09.072 "num_base_bdevs": 3, 00:09:09.072 "num_base_bdevs_discovered": 3, 00:09:09.072 "num_base_bdevs_operational": 3, 00:09:09.072 "base_bdevs_list": [ 00:09:09.072 { 00:09:09.072 "name": "BaseBdev1", 00:09:09.072 "uuid": "982e8556-33f9-42a4-9d6e-27cee5177270", 00:09:09.072 "is_configured": true, 00:09:09.072 "data_offset": 0, 00:09:09.072 "data_size": 65536 00:09:09.072 }, 00:09:09.072 { 00:09:09.072 "name": "BaseBdev2", 00:09:09.072 "uuid": "1862259d-8b9b-4c9b-9696-acc9836d3984", 00:09:09.072 "is_configured": true, 00:09:09.072 "data_offset": 0, 00:09:09.072 "data_size": 65536 00:09:09.072 }, 00:09:09.072 { 00:09:09.072 "name": "BaseBdev3", 00:09:09.072 "uuid": "9e48bf59-c79b-4ca4-81d9-924f0de1f89c", 00:09:09.072 "is_configured": true, 
00:09:09.072 "data_offset": 0, 00:09:09.072 "data_size": 65536 00:09:09.072 } 00:09:09.072 ] 00:09:09.072 } 00:09:09.072 } 00:09:09.072 }' 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.072 BaseBdev2 00:09:09.072 BaseBdev3' 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.072 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.332 [2024-11-04 11:41:34.743914] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.332 [2024-11-04 11:41:34.743986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.332 [2024-11-04 11:41:34.744062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.332 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.591 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.592 "name": "Existed_Raid", 00:09:09.592 "uuid": "4e1352db-a7e7-4990-8d6c-485a25024c01", 00:09:09.592 "strip_size_kb": 64, 00:09:09.592 "state": "offline", 00:09:09.592 "raid_level": "raid0", 00:09:09.592 "superblock": false, 00:09:09.592 "num_base_bdevs": 3, 00:09:09.592 "num_base_bdevs_discovered": 2, 00:09:09.592 "num_base_bdevs_operational": 2, 00:09:09.592 "base_bdevs_list": [ 00:09:09.592 { 00:09:09.592 "name": null, 00:09:09.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.592 "is_configured": false, 00:09:09.592 "data_offset": 0, 00:09:09.592 "data_size": 65536 00:09:09.592 }, 00:09:09.592 { 00:09:09.592 "name": "BaseBdev2", 00:09:09.592 "uuid": "1862259d-8b9b-4c9b-9696-acc9836d3984", 00:09:09.592 "is_configured": true, 00:09:09.592 "data_offset": 0, 00:09:09.592 "data_size": 65536 00:09:09.592 }, 00:09:09.592 { 00:09:09.592 "name": "BaseBdev3", 00:09:09.592 "uuid": "9e48bf59-c79b-4ca4-81d9-924f0de1f89c", 00:09:09.592 "is_configured": true, 00:09:09.592 "data_offset": 0, 00:09:09.592 "data_size": 65536 00:09:09.592 } 00:09:09.592 ] 00:09:09.592 }' 00:09:09.592 11:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.592 11:41:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.851 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.851 [2024-11-04 11:41:35.362383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.110 [2024-11-04 11:41:35.509787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.110 [2024-11-04 11:41:35.509888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.110 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.110 11:41:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 BaseBdev2 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.370 11:41:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 [ 00:09:10.370 { 00:09:10.370 "name": "BaseBdev2", 00:09:10.370 "aliases": [ 00:09:10.370 "1d8f2d6b-46d1-40a8-95ec-3884e1e95594" 00:09:10.370 ], 00:09:10.370 "product_name": "Malloc disk", 00:09:10.370 "block_size": 512, 00:09:10.370 "num_blocks": 65536, 00:09:10.370 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:10.370 "assigned_rate_limits": { 00:09:10.370 "rw_ios_per_sec": 0, 00:09:10.370 "rw_mbytes_per_sec": 0, 00:09:10.370 "r_mbytes_per_sec": 0, 00:09:10.370 "w_mbytes_per_sec": 0 00:09:10.370 }, 00:09:10.370 "claimed": false, 00:09:10.370 "zoned": false, 00:09:10.370 "supported_io_types": { 00:09:10.370 "read": true, 00:09:10.370 "write": true, 00:09:10.370 "unmap": true, 00:09:10.370 "flush": true, 00:09:10.370 "reset": true, 00:09:10.370 "nvme_admin": false, 00:09:10.370 "nvme_io": false, 00:09:10.370 "nvme_io_md": false, 00:09:10.370 "write_zeroes": true, 00:09:10.370 "zcopy": true, 00:09:10.370 "get_zone_info": false, 00:09:10.370 "zone_management": false, 00:09:10.370 "zone_append": false, 00:09:10.370 "compare": false, 00:09:10.370 "compare_and_write": false, 00:09:10.370 "abort": true, 00:09:10.370 "seek_hole": false, 00:09:10.370 "seek_data": false, 00:09:10.370 "copy": true, 00:09:10.370 "nvme_iov_md": false 00:09:10.370 }, 00:09:10.370 "memory_domains": [ 00:09:10.370 { 00:09:10.370 "dma_device_id": "system", 00:09:10.370 "dma_device_type": 1 00:09:10.370 }, 00:09:10.370 { 00:09:10.370 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:10.370 "dma_device_type": 2 00:09:10.370 } 00:09:10.370 ], 00:09:10.370 "driver_specific": {} 00:09:10.370 } 00:09:10.370 ] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 BaseBdev3 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.370 11:41:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.370 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.370 [ 00:09:10.370 { 00:09:10.370 "name": "BaseBdev3", 00:09:10.370 "aliases": [ 00:09:10.370 "310afd15-e09a-4ca0-b12b-12ac70bcd814" 00:09:10.370 ], 00:09:10.370 "product_name": "Malloc disk", 00:09:10.370 "block_size": 512, 00:09:10.370 "num_blocks": 65536, 00:09:10.370 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:10.370 "assigned_rate_limits": { 00:09:10.370 "rw_ios_per_sec": 0, 00:09:10.370 "rw_mbytes_per_sec": 0, 00:09:10.371 "r_mbytes_per_sec": 0, 00:09:10.371 "w_mbytes_per_sec": 0 00:09:10.371 }, 00:09:10.371 "claimed": false, 00:09:10.371 "zoned": false, 00:09:10.371 "supported_io_types": { 00:09:10.371 "read": true, 00:09:10.371 "write": true, 00:09:10.371 "unmap": true, 00:09:10.371 "flush": true, 00:09:10.371 "reset": true, 00:09:10.371 "nvme_admin": false, 00:09:10.371 "nvme_io": false, 00:09:10.371 "nvme_io_md": false, 00:09:10.371 "write_zeroes": true, 00:09:10.371 "zcopy": true, 00:09:10.371 "get_zone_info": false, 00:09:10.371 "zone_management": false, 00:09:10.371 "zone_append": false, 00:09:10.371 "compare": false, 00:09:10.371 "compare_and_write": false, 00:09:10.371 "abort": true, 00:09:10.371 "seek_hole": false, 00:09:10.371 "seek_data": false, 00:09:10.371 "copy": true, 00:09:10.371 "nvme_iov_md": false 00:09:10.371 }, 00:09:10.371 "memory_domains": [ 00:09:10.371 { 00:09:10.371 "dma_device_id": "system", 00:09:10.371 "dma_device_type": 1 00:09:10.371 }, 00:09:10.371 { 00:09:10.371 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:10.371 "dma_device_type": 2 00:09:10.371 } 00:09:10.371 ], 00:09:10.371 "driver_specific": {} 00:09:10.371 } 00:09:10.371 ] 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 [2024-11-04 11:41:35.838166] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.371 [2024-11-04 11:41:35.838269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.371 [2024-11-04 11:41:35.838304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.371 [2024-11-04 11:41:35.840455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.371 
11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.371 "name": "Existed_Raid", 00:09:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.371 "strip_size_kb": 64, 00:09:10.371 "state": "configuring", 00:09:10.371 "raid_level": "raid0", 00:09:10.371 "superblock": false, 00:09:10.371 "num_base_bdevs": 3, 00:09:10.371 "num_base_bdevs_discovered": 2, 00:09:10.371 "num_base_bdevs_operational": 3, 00:09:10.371 "base_bdevs_list": [ 00:09:10.371 { 00:09:10.371 "name": "BaseBdev1", 00:09:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.371 "is_configured": false, 00:09:10.371 
"data_offset": 0, 00:09:10.371 "data_size": 0 00:09:10.371 }, 00:09:10.371 { 00:09:10.371 "name": "BaseBdev2", 00:09:10.371 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:10.371 "is_configured": true, 00:09:10.371 "data_offset": 0, 00:09:10.371 "data_size": 65536 00:09:10.371 }, 00:09:10.371 { 00:09:10.371 "name": "BaseBdev3", 00:09:10.371 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:10.371 "is_configured": true, 00:09:10.371 "data_offset": 0, 00:09:10.371 "data_size": 65536 00:09:10.371 } 00:09:10.371 ] 00:09:10.371 }' 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.371 11:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.944 [2024-11-04 11:41:36.273424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.944 "name": "Existed_Raid", 00:09:10.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.944 "strip_size_kb": 64, 00:09:10.944 "state": "configuring", 00:09:10.944 "raid_level": "raid0", 00:09:10.944 "superblock": false, 00:09:10.944 "num_base_bdevs": 3, 00:09:10.944 "num_base_bdevs_discovered": 1, 00:09:10.944 "num_base_bdevs_operational": 3, 00:09:10.944 "base_bdevs_list": [ 00:09:10.944 { 00:09:10.944 "name": "BaseBdev1", 00:09:10.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.944 "is_configured": false, 00:09:10.944 "data_offset": 0, 00:09:10.944 "data_size": 0 00:09:10.944 }, 00:09:10.944 { 00:09:10.944 "name": null, 00:09:10.944 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:10.944 "is_configured": false, 00:09:10.944 "data_offset": 0, 00:09:10.944 "data_size": 65536 00:09:10.944 }, 00:09:10.944 { 
00:09:10.944 "name": "BaseBdev3", 00:09:10.944 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:10.944 "is_configured": true, 00:09:10.944 "data_offset": 0, 00:09:10.944 "data_size": 65536 00:09:10.944 } 00:09:10.944 ] 00:09:10.944 }' 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.944 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.204 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.204 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.204 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.204 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.204 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.463 [2024-11-04 11:41:36.787840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.463 BaseBdev1 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:11.463 11:41:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.463 [ 00:09:11.463 { 00:09:11.463 "name": "BaseBdev1", 00:09:11.463 "aliases": [ 00:09:11.463 "e8e615e4-af2a-42be-8c1f-4d149b223467" 00:09:11.463 ], 00:09:11.463 "product_name": "Malloc disk", 00:09:11.463 "block_size": 512, 00:09:11.463 "num_blocks": 65536, 00:09:11.463 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:11.463 "assigned_rate_limits": { 00:09:11.463 "rw_ios_per_sec": 0, 00:09:11.463 "rw_mbytes_per_sec": 0, 00:09:11.463 "r_mbytes_per_sec": 0, 00:09:11.463 "w_mbytes_per_sec": 0 00:09:11.463 }, 00:09:11.463 "claimed": true, 00:09:11.463 "claim_type": "exclusive_write", 00:09:11.463 "zoned": false, 00:09:11.463 "supported_io_types": { 00:09:11.463 "read": true, 00:09:11.463 "write": true, 00:09:11.463 "unmap": true, 00:09:11.463 "flush": true, 
00:09:11.463 "reset": true, 00:09:11.463 "nvme_admin": false, 00:09:11.463 "nvme_io": false, 00:09:11.463 "nvme_io_md": false, 00:09:11.463 "write_zeroes": true, 00:09:11.463 "zcopy": true, 00:09:11.463 "get_zone_info": false, 00:09:11.463 "zone_management": false, 00:09:11.463 "zone_append": false, 00:09:11.463 "compare": false, 00:09:11.463 "compare_and_write": false, 00:09:11.463 "abort": true, 00:09:11.463 "seek_hole": false, 00:09:11.463 "seek_data": false, 00:09:11.463 "copy": true, 00:09:11.463 "nvme_iov_md": false 00:09:11.463 }, 00:09:11.463 "memory_domains": [ 00:09:11.463 { 00:09:11.463 "dma_device_id": "system", 00:09:11.463 "dma_device_type": 1 00:09:11.463 }, 00:09:11.463 { 00:09:11.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.463 "dma_device_type": 2 00:09:11.463 } 00:09:11.463 ], 00:09:11.463 "driver_specific": {} 00:09:11.463 } 00:09:11.463 ] 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.463 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.464 "name": "Existed_Raid", 00:09:11.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.464 "strip_size_kb": 64, 00:09:11.464 "state": "configuring", 00:09:11.464 "raid_level": "raid0", 00:09:11.464 "superblock": false, 00:09:11.464 "num_base_bdevs": 3, 00:09:11.464 "num_base_bdevs_discovered": 2, 00:09:11.464 "num_base_bdevs_operational": 3, 00:09:11.464 "base_bdevs_list": [ 00:09:11.464 { 00:09:11.464 "name": "BaseBdev1", 00:09:11.464 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:11.464 "is_configured": true, 00:09:11.464 "data_offset": 0, 00:09:11.464 "data_size": 65536 00:09:11.464 }, 00:09:11.464 { 00:09:11.464 "name": null, 00:09:11.464 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:11.464 "is_configured": false, 00:09:11.464 "data_offset": 0, 00:09:11.464 "data_size": 65536 00:09:11.464 }, 00:09:11.464 { 00:09:11.464 "name": "BaseBdev3", 00:09:11.464 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:11.464 "is_configured": true, 00:09:11.464 "data_offset": 0, 00:09:11.464 "data_size": 65536 
00:09:11.464 } 00:09:11.464 ] 00:09:11.464 }' 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.464 11:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.032 [2024-11-04 11:41:37.338978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.032 
11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.032 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.032 "name": "Existed_Raid", 00:09:12.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.032 "strip_size_kb": 64, 00:09:12.032 "state": "configuring", 00:09:12.032 "raid_level": "raid0", 00:09:12.032 "superblock": false, 00:09:12.032 "num_base_bdevs": 3, 00:09:12.032 "num_base_bdevs_discovered": 1, 00:09:12.032 "num_base_bdevs_operational": 3, 00:09:12.032 "base_bdevs_list": [ 00:09:12.032 { 00:09:12.032 "name": "BaseBdev1", 00:09:12.032 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:12.032 "is_configured": true, 00:09:12.032 "data_offset": 0, 00:09:12.032 "data_size": 65536 00:09:12.032 }, 00:09:12.032 { 00:09:12.032 "name": null, 
00:09:12.032 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:12.033 "is_configured": false, 00:09:12.033 "data_offset": 0, 00:09:12.033 "data_size": 65536 00:09:12.033 }, 00:09:12.033 { 00:09:12.033 "name": null, 00:09:12.033 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:12.033 "is_configured": false, 00:09:12.033 "data_offset": 0, 00:09:12.033 "data_size": 65536 00:09:12.033 } 00:09:12.033 ] 00:09:12.033 }' 00:09:12.033 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.033 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.292 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.292 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.292 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.292 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.292 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.552 [2024-11-04 11:41:37.838213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.552 "name": "Existed_Raid", 00:09:12.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.552 "strip_size_kb": 64, 00:09:12.552 "state": "configuring", 00:09:12.552 "raid_level": "raid0", 00:09:12.552 "superblock": false, 00:09:12.552 
"num_base_bdevs": 3, 00:09:12.552 "num_base_bdevs_discovered": 2, 00:09:12.552 "num_base_bdevs_operational": 3, 00:09:12.552 "base_bdevs_list": [ 00:09:12.552 { 00:09:12.552 "name": "BaseBdev1", 00:09:12.552 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:12.552 "is_configured": true, 00:09:12.552 "data_offset": 0, 00:09:12.552 "data_size": 65536 00:09:12.552 }, 00:09:12.552 { 00:09:12.552 "name": null, 00:09:12.552 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:12.552 "is_configured": false, 00:09:12.552 "data_offset": 0, 00:09:12.552 "data_size": 65536 00:09:12.552 }, 00:09:12.552 { 00:09:12.552 "name": "BaseBdev3", 00:09:12.552 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:12.552 "is_configured": true, 00:09:12.552 "data_offset": 0, 00:09:12.552 "data_size": 65536 00:09:12.552 } 00:09:12.552 ] 00:09:12.552 }' 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.552 11:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.812 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.812 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.812 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.812 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.071 11:41:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.071 [2024-11-04 11:41:38.377309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.071 "name": "Existed_Raid", 00:09:13.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.071 "strip_size_kb": 64, 00:09:13.071 "state": "configuring", 00:09:13.071 "raid_level": "raid0", 00:09:13.071 "superblock": false, 00:09:13.071 "num_base_bdevs": 3, 00:09:13.071 "num_base_bdevs_discovered": 1, 00:09:13.071 "num_base_bdevs_operational": 3, 00:09:13.071 "base_bdevs_list": [ 00:09:13.071 { 00:09:13.071 "name": null, 00:09:13.071 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:13.071 "is_configured": false, 00:09:13.071 "data_offset": 0, 00:09:13.071 "data_size": 65536 00:09:13.071 }, 00:09:13.071 { 00:09:13.071 "name": null, 00:09:13.071 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:13.071 "is_configured": false, 00:09:13.071 "data_offset": 0, 00:09:13.071 "data_size": 65536 00:09:13.071 }, 00:09:13.071 { 00:09:13.071 "name": "BaseBdev3", 00:09:13.071 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:13.071 "is_configured": true, 00:09:13.071 "data_offset": 0, 00:09:13.071 "data_size": 65536 00:09:13.071 } 00:09:13.071 ] 00:09:13.071 }' 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.071 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.677 [2024-11-04 11:41:38.964414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.677 11:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.677 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.677 "name": "Existed_Raid", 00:09:13.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.677 "strip_size_kb": 64, 00:09:13.677 "state": "configuring", 00:09:13.677 "raid_level": "raid0", 00:09:13.677 "superblock": false, 00:09:13.677 "num_base_bdevs": 3, 00:09:13.677 "num_base_bdevs_discovered": 2, 00:09:13.677 "num_base_bdevs_operational": 3, 00:09:13.677 "base_bdevs_list": [ 00:09:13.677 { 00:09:13.677 "name": null, 00:09:13.677 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:13.677 "is_configured": false, 00:09:13.677 "data_offset": 0, 00:09:13.677 "data_size": 65536 00:09:13.677 }, 00:09:13.677 { 00:09:13.677 "name": "BaseBdev2", 00:09:13.677 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:13.677 "is_configured": true, 00:09:13.677 "data_offset": 0, 00:09:13.677 "data_size": 65536 00:09:13.677 }, 00:09:13.677 { 00:09:13.677 "name": "BaseBdev3", 00:09:13.677 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:13.677 "is_configured": true, 00:09:13.678 "data_offset": 0, 00:09:13.678 "data_size": 65536 00:09:13.678 } 00:09:13.678 ] 00:09:13.678 }' 00:09:13.678 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.678 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.953 
11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:13.953 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e8e615e4-af2a-42be-8c1f-4d149b223467 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.213 [2024-11-04 11:41:39.530388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:14.213 [2024-11-04 11:41:39.530455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:14.213 [2024-11-04 11:41:39.530466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:14.213 [2024-11-04 11:41:39.530732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:14.213 [2024-11-04 11:41:39.530928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:14.213 [2024-11-04 11:41:39.530955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:14.213 [2024-11-04 11:41:39.531218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.213 NewBaseBdev 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:14.213 [ 00:09:14.213 { 00:09:14.213 "name": "NewBaseBdev", 00:09:14.213 "aliases": [ 00:09:14.213 "e8e615e4-af2a-42be-8c1f-4d149b223467" 00:09:14.213 ], 00:09:14.213 "product_name": "Malloc disk", 00:09:14.213 "block_size": 512, 00:09:14.213 "num_blocks": 65536, 00:09:14.213 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:14.213 "assigned_rate_limits": { 00:09:14.213 "rw_ios_per_sec": 0, 00:09:14.213 "rw_mbytes_per_sec": 0, 00:09:14.213 "r_mbytes_per_sec": 0, 00:09:14.213 "w_mbytes_per_sec": 0 00:09:14.213 }, 00:09:14.213 "claimed": true, 00:09:14.213 "claim_type": "exclusive_write", 00:09:14.213 "zoned": false, 00:09:14.213 "supported_io_types": { 00:09:14.213 "read": true, 00:09:14.213 "write": true, 00:09:14.213 "unmap": true, 00:09:14.213 "flush": true, 00:09:14.213 "reset": true, 00:09:14.213 "nvme_admin": false, 00:09:14.213 "nvme_io": false, 00:09:14.213 "nvme_io_md": false, 00:09:14.213 "write_zeroes": true, 00:09:14.213 "zcopy": true, 00:09:14.213 "get_zone_info": false, 00:09:14.213 "zone_management": false, 00:09:14.213 "zone_append": false, 00:09:14.213 "compare": false, 00:09:14.213 "compare_and_write": false, 00:09:14.213 "abort": true, 00:09:14.213 "seek_hole": false, 00:09:14.213 "seek_data": false, 00:09:14.213 "copy": true, 00:09:14.213 "nvme_iov_md": false 00:09:14.213 }, 00:09:14.213 "memory_domains": [ 00:09:14.213 { 00:09:14.213 "dma_device_id": "system", 00:09:14.213 "dma_device_type": 1 00:09:14.213 }, 00:09:14.213 { 00:09:14.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.213 "dma_device_type": 2 00:09:14.213 } 00:09:14.213 ], 00:09:14.213 "driver_specific": {} 00:09:14.213 } 00:09:14.213 ] 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.213 "name": "Existed_Raid", 00:09:14.213 "uuid": "10c18979-a45d-40b5-884c-0bb4f6497032", 00:09:14.213 "strip_size_kb": 64, 00:09:14.213 "state": "online", 00:09:14.213 "raid_level": "raid0", 00:09:14.213 "superblock": false, 00:09:14.213 "num_base_bdevs": 3, 00:09:14.213 
"num_base_bdevs_discovered": 3, 00:09:14.213 "num_base_bdevs_operational": 3, 00:09:14.213 "base_bdevs_list": [ 00:09:14.213 { 00:09:14.213 "name": "NewBaseBdev", 00:09:14.213 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:14.213 "is_configured": true, 00:09:14.213 "data_offset": 0, 00:09:14.213 "data_size": 65536 00:09:14.213 }, 00:09:14.213 { 00:09:14.213 "name": "BaseBdev2", 00:09:14.213 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:14.213 "is_configured": true, 00:09:14.213 "data_offset": 0, 00:09:14.213 "data_size": 65536 00:09:14.213 }, 00:09:14.213 { 00:09:14.213 "name": "BaseBdev3", 00:09:14.213 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:14.213 "is_configured": true, 00:09:14.213 "data_offset": 0, 00:09:14.213 "data_size": 65536 00:09:14.213 } 00:09:14.213 ] 00:09:14.213 }' 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.213 11:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.781 [2024-11-04 11:41:40.057886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.781 "name": "Existed_Raid", 00:09:14.781 "aliases": [ 00:09:14.781 "10c18979-a45d-40b5-884c-0bb4f6497032" 00:09:14.781 ], 00:09:14.781 "product_name": "Raid Volume", 00:09:14.781 "block_size": 512, 00:09:14.781 "num_blocks": 196608, 00:09:14.781 "uuid": "10c18979-a45d-40b5-884c-0bb4f6497032", 00:09:14.781 "assigned_rate_limits": { 00:09:14.781 "rw_ios_per_sec": 0, 00:09:14.781 "rw_mbytes_per_sec": 0, 00:09:14.781 "r_mbytes_per_sec": 0, 00:09:14.781 "w_mbytes_per_sec": 0 00:09:14.781 }, 00:09:14.781 "claimed": false, 00:09:14.781 "zoned": false, 00:09:14.781 "supported_io_types": { 00:09:14.781 "read": true, 00:09:14.781 "write": true, 00:09:14.781 "unmap": true, 00:09:14.781 "flush": true, 00:09:14.781 "reset": true, 00:09:14.781 "nvme_admin": false, 00:09:14.781 "nvme_io": false, 00:09:14.781 "nvme_io_md": false, 00:09:14.781 "write_zeroes": true, 00:09:14.781 "zcopy": false, 00:09:14.781 "get_zone_info": false, 00:09:14.781 "zone_management": false, 00:09:14.781 "zone_append": false, 00:09:14.781 "compare": false, 00:09:14.781 "compare_and_write": false, 00:09:14.781 "abort": false, 00:09:14.781 "seek_hole": false, 00:09:14.781 "seek_data": false, 00:09:14.781 "copy": false, 00:09:14.781 "nvme_iov_md": false 00:09:14.781 }, 00:09:14.781 "memory_domains": [ 00:09:14.781 { 00:09:14.781 "dma_device_id": "system", 00:09:14.781 "dma_device_type": 1 00:09:14.781 }, 00:09:14.781 { 00:09:14.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.781 "dma_device_type": 2 00:09:14.781 }, 
00:09:14.781 { 00:09:14.781 "dma_device_id": "system", 00:09:14.781 "dma_device_type": 1 00:09:14.781 }, 00:09:14.781 { 00:09:14.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.781 "dma_device_type": 2 00:09:14.781 }, 00:09:14.781 { 00:09:14.781 "dma_device_id": "system", 00:09:14.781 "dma_device_type": 1 00:09:14.781 }, 00:09:14.781 { 00:09:14.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.781 "dma_device_type": 2 00:09:14.781 } 00:09:14.781 ], 00:09:14.781 "driver_specific": { 00:09:14.781 "raid": { 00:09:14.781 "uuid": "10c18979-a45d-40b5-884c-0bb4f6497032", 00:09:14.781 "strip_size_kb": 64, 00:09:14.781 "state": "online", 00:09:14.781 "raid_level": "raid0", 00:09:14.781 "superblock": false, 00:09:14.781 "num_base_bdevs": 3, 00:09:14.781 "num_base_bdevs_discovered": 3, 00:09:14.781 "num_base_bdevs_operational": 3, 00:09:14.781 "base_bdevs_list": [ 00:09:14.781 { 00:09:14.781 "name": "NewBaseBdev", 00:09:14.781 "uuid": "e8e615e4-af2a-42be-8c1f-4d149b223467", 00:09:14.781 "is_configured": true, 00:09:14.781 "data_offset": 0, 00:09:14.781 "data_size": 65536 00:09:14.781 }, 00:09:14.781 { 00:09:14.781 "name": "BaseBdev2", 00:09:14.781 "uuid": "1d8f2d6b-46d1-40a8-95ec-3884e1e95594", 00:09:14.781 "is_configured": true, 00:09:14.781 "data_offset": 0, 00:09:14.781 "data_size": 65536 00:09:14.781 }, 00:09:14.781 { 00:09:14.781 "name": "BaseBdev3", 00:09:14.781 "uuid": "310afd15-e09a-4ca0-b12b-12ac70bcd814", 00:09:14.781 "is_configured": true, 00:09:14.781 "data_offset": 0, 00:09:14.781 "data_size": 65536 00:09:14.781 } 00:09:14.781 ] 00:09:14.781 } 00:09:14.781 } 00:09:14.781 }' 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:14.781 BaseBdev2 00:09:14.781 BaseBdev3' 00:09:14.781 11:41:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.781 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.782 [2024-11-04 11:41:40.293270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.782 [2024-11-04 11:41:40.293461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.782 [2024-11-04 11:41:40.293677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.782 [2024-11-04 11:41:40.293832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.782 [2024-11-04 11:41:40.293906] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64025 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 64025 ']' 00:09:14.782 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 64025 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64025 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64025' 00:09:15.041 killing process with pid 64025 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 64025 00:09:15.041 11:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 64025 00:09:15.041 [2024-11-04 11:41:40.343314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.300 [2024-11-04 11:41:40.754816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.679 11:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.679 00:09:16.679 real 0m11.314s 00:09:16.679 user 0m17.806s 00:09:16.679 sys 0m1.855s 00:09:16.679 11:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- 
# xtrace_disable 00:09:16.679 ************************************ 00:09:16.679 END TEST raid_state_function_test 00:09:16.679 ************************************ 00:09:16.679 11:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.939 11:41:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:16.939 11:41:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:16.939 11:41:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.939 11:41:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.939 ************************************ 00:09:16.939 START TEST raid_state_function_test_sb 00:09:16.939 ************************************ 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64657 00:09:16.939 11:41:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64657' 00:09:16.939 Process raid pid: 64657 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64657 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64657 ']' 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:16.939 11:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.940 [2024-11-04 11:41:42.334591] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:09:16.940 [2024-11-04 11:41:42.334825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.198 [2024-11-04 11:41:42.518042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.198 [2024-11-04 11:41:42.681060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.456 [2024-11-04 11:41:42.968034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.456 [2024-11-04 11:41:42.968210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.714 [2024-11-04 11:41:43.219461] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.714 [2024-11-04 11:41:43.219557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.714 [2024-11-04 11:41:43.219570] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.714 [2024-11-04 11:41:43.219580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.714 [2024-11-04 11:41:43.219587] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:17.714 [2024-11-04 11:41:43.219598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.714 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.973 11:41:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.973 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.973 "name": "Existed_Raid", 00:09:17.973 "uuid": "672518a4-b00b-42ef-a237-2560009728f6", 00:09:17.973 "strip_size_kb": 64, 00:09:17.973 "state": "configuring", 00:09:17.973 "raid_level": "raid0", 00:09:17.973 "superblock": true, 00:09:17.973 "num_base_bdevs": 3, 00:09:17.973 "num_base_bdevs_discovered": 0, 00:09:17.973 "num_base_bdevs_operational": 3, 00:09:17.973 "base_bdevs_list": [ 00:09:17.973 { 00:09:17.973 "name": "BaseBdev1", 00:09:17.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.973 "is_configured": false, 00:09:17.973 "data_offset": 0, 00:09:17.973 "data_size": 0 00:09:17.973 }, 00:09:17.973 { 00:09:17.973 "name": "BaseBdev2", 00:09:17.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.973 "is_configured": false, 00:09:17.973 "data_offset": 0, 00:09:17.973 "data_size": 0 00:09:17.973 }, 00:09:17.973 { 00:09:17.973 "name": "BaseBdev3", 00:09:17.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.973 "is_configured": false, 00:09:17.973 "data_offset": 0, 00:09:17.973 "data_size": 0 00:09:17.973 } 00:09:17.973 ] 00:09:17.973 }' 00:09:17.973 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.973 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.231 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.232 [2024-11-04 11:41:43.710625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.232 [2024-11-04 11:41:43.710785] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.232 [2024-11-04 11:41:43.722597] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.232 [2024-11-04 11:41:43.722762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.232 [2024-11-04 11:41:43.722796] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.232 [2024-11-04 11:41:43.722825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.232 [2024-11-04 11:41:43.722848] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.232 [2024-11-04 11:41:43.722874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.232 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.489 [2024-11-04 11:41:43.786317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.489 BaseBdev1 
00:09:18.489 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.490 [ 00:09:18.490 { 00:09:18.490 "name": "BaseBdev1", 00:09:18.490 "aliases": [ 00:09:18.490 "63c56bc3-fb68-4f3c-a97d-6d070c7eebe2" 00:09:18.490 ], 00:09:18.490 "product_name": "Malloc disk", 00:09:18.490 "block_size": 512, 00:09:18.490 "num_blocks": 65536, 00:09:18.490 "uuid": "63c56bc3-fb68-4f3c-a97d-6d070c7eebe2", 00:09:18.490 "assigned_rate_limits": { 00:09:18.490 
"rw_ios_per_sec": 0, 00:09:18.490 "rw_mbytes_per_sec": 0, 00:09:18.490 "r_mbytes_per_sec": 0, 00:09:18.490 "w_mbytes_per_sec": 0 00:09:18.490 }, 00:09:18.490 "claimed": true, 00:09:18.490 "claim_type": "exclusive_write", 00:09:18.490 "zoned": false, 00:09:18.490 "supported_io_types": { 00:09:18.490 "read": true, 00:09:18.490 "write": true, 00:09:18.490 "unmap": true, 00:09:18.490 "flush": true, 00:09:18.490 "reset": true, 00:09:18.490 "nvme_admin": false, 00:09:18.490 "nvme_io": false, 00:09:18.490 "nvme_io_md": false, 00:09:18.490 "write_zeroes": true, 00:09:18.490 "zcopy": true, 00:09:18.490 "get_zone_info": false, 00:09:18.490 "zone_management": false, 00:09:18.490 "zone_append": false, 00:09:18.490 "compare": false, 00:09:18.490 "compare_and_write": false, 00:09:18.490 "abort": true, 00:09:18.490 "seek_hole": false, 00:09:18.490 "seek_data": false, 00:09:18.490 "copy": true, 00:09:18.490 "nvme_iov_md": false 00:09:18.490 }, 00:09:18.490 "memory_domains": [ 00:09:18.490 { 00:09:18.490 "dma_device_id": "system", 00:09:18.490 "dma_device_type": 1 00:09:18.490 }, 00:09:18.490 { 00:09:18.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.490 "dma_device_type": 2 00:09:18.490 } 00:09:18.490 ], 00:09:18.490 "driver_specific": {} 00:09:18.490 } 00:09:18.490 ] 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.490 "name": "Existed_Raid", 00:09:18.490 "uuid": "02f49160-0996-4e17-8372-f1ebc871669f", 00:09:18.490 "strip_size_kb": 64, 00:09:18.490 "state": "configuring", 00:09:18.490 "raid_level": "raid0", 00:09:18.490 "superblock": true, 00:09:18.490 "num_base_bdevs": 3, 00:09:18.490 "num_base_bdevs_discovered": 1, 00:09:18.490 "num_base_bdevs_operational": 3, 00:09:18.490 "base_bdevs_list": [ 00:09:18.490 { 00:09:18.490 "name": "BaseBdev1", 00:09:18.490 "uuid": "63c56bc3-fb68-4f3c-a97d-6d070c7eebe2", 00:09:18.490 "is_configured": true, 00:09:18.490 "data_offset": 2048, 00:09:18.490 "data_size": 63488 
00:09:18.490 }, 00:09:18.490 { 00:09:18.490 "name": "BaseBdev2", 00:09:18.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.490 "is_configured": false, 00:09:18.490 "data_offset": 0, 00:09:18.490 "data_size": 0 00:09:18.490 }, 00:09:18.490 { 00:09:18.490 "name": "BaseBdev3", 00:09:18.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.490 "is_configured": false, 00:09:18.490 "data_offset": 0, 00:09:18.490 "data_size": 0 00:09:18.490 } 00:09:18.490 ] 00:09:18.490 }' 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.490 11:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.057 [2024-11-04 11:41:44.297582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.057 [2024-11-04 11:41:44.297682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.057 [2024-11-04 11:41:44.309632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.057 [2024-11-04 
11:41:44.312220] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.057 [2024-11-04 11:41:44.312275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.057 [2024-11-04 11:41:44.312287] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.057 [2024-11-04 11:41:44.312297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.057 "name": "Existed_Raid", 00:09:19.057 "uuid": "5eba1566-18a2-47f5-99b4-93eed0293682", 00:09:19.057 "strip_size_kb": 64, 00:09:19.057 "state": "configuring", 00:09:19.057 "raid_level": "raid0", 00:09:19.057 "superblock": true, 00:09:19.057 "num_base_bdevs": 3, 00:09:19.057 "num_base_bdevs_discovered": 1, 00:09:19.057 "num_base_bdevs_operational": 3, 00:09:19.057 "base_bdevs_list": [ 00:09:19.057 { 00:09:19.057 "name": "BaseBdev1", 00:09:19.057 "uuid": "63c56bc3-fb68-4f3c-a97d-6d070c7eebe2", 00:09:19.057 "is_configured": true, 00:09:19.057 "data_offset": 2048, 00:09:19.057 "data_size": 63488 00:09:19.057 }, 00:09:19.057 { 00:09:19.057 "name": "BaseBdev2", 00:09:19.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.057 "is_configured": false, 00:09:19.057 "data_offset": 0, 00:09:19.057 "data_size": 0 00:09:19.057 }, 00:09:19.057 { 00:09:19.057 "name": "BaseBdev3", 00:09:19.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.057 "is_configured": false, 00:09:19.057 "data_offset": 0, 00:09:19.057 "data_size": 0 00:09:19.057 } 00:09:19.057 ] 00:09:19.057 }' 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.057 11:41:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.316 [2024-11-04 11:41:44.821515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.316 BaseBdev2 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.316 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.575 [ 00:09:19.575 { 00:09:19.575 "name": "BaseBdev2", 00:09:19.575 "aliases": [ 00:09:19.575 "88586dea-8b79-4b83-9321-96e88a46a6a6" 00:09:19.575 ], 00:09:19.575 "product_name": "Malloc disk", 00:09:19.575 "block_size": 512, 00:09:19.575 "num_blocks": 65536, 00:09:19.575 "uuid": "88586dea-8b79-4b83-9321-96e88a46a6a6", 00:09:19.575 "assigned_rate_limits": { 00:09:19.575 "rw_ios_per_sec": 0, 00:09:19.575 "rw_mbytes_per_sec": 0, 00:09:19.575 "r_mbytes_per_sec": 0, 00:09:19.575 "w_mbytes_per_sec": 0 00:09:19.575 }, 00:09:19.575 "claimed": true, 00:09:19.575 "claim_type": "exclusive_write", 00:09:19.575 "zoned": false, 00:09:19.575 "supported_io_types": { 00:09:19.575 "read": true, 00:09:19.575 "write": true, 00:09:19.575 "unmap": true, 00:09:19.575 "flush": true, 00:09:19.575 "reset": true, 00:09:19.575 "nvme_admin": false, 00:09:19.575 "nvme_io": false, 00:09:19.575 "nvme_io_md": false, 00:09:19.575 "write_zeroes": true, 00:09:19.575 "zcopy": true, 00:09:19.575 "get_zone_info": false, 00:09:19.575 "zone_management": false, 00:09:19.575 "zone_append": false, 00:09:19.575 "compare": false, 00:09:19.575 "compare_and_write": false, 00:09:19.575 "abort": true, 00:09:19.575 "seek_hole": false, 00:09:19.575 "seek_data": false, 00:09:19.575 "copy": true, 00:09:19.575 "nvme_iov_md": false 00:09:19.575 }, 00:09:19.575 "memory_domains": [ 00:09:19.575 { 00:09:19.575 "dma_device_id": "system", 00:09:19.575 "dma_device_type": 1 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.575 "dma_device_type": 2 00:09:19.575 } 00:09:19.575 ], 00:09:19.575 "driver_specific": {} 00:09:19.575 } 00:09:19.575 ] 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.575 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.575 "name": "Existed_Raid", 00:09:19.575 "uuid": "5eba1566-18a2-47f5-99b4-93eed0293682", 00:09:19.575 "strip_size_kb": 64, 00:09:19.575 "state": "configuring", 00:09:19.575 "raid_level": "raid0", 00:09:19.575 "superblock": true, 00:09:19.575 "num_base_bdevs": 3, 00:09:19.575 "num_base_bdevs_discovered": 2, 00:09:19.575 "num_base_bdevs_operational": 3, 00:09:19.575 "base_bdevs_list": [ 00:09:19.575 { 00:09:19.575 "name": "BaseBdev1", 00:09:19.575 "uuid": "63c56bc3-fb68-4f3c-a97d-6d070c7eebe2", 00:09:19.575 "is_configured": true, 00:09:19.575 "data_offset": 2048, 00:09:19.575 "data_size": 63488 00:09:19.575 }, 00:09:19.575 { 00:09:19.575 "name": "BaseBdev2", 00:09:19.576 "uuid": "88586dea-8b79-4b83-9321-96e88a46a6a6", 00:09:19.576 "is_configured": true, 00:09:19.576 "data_offset": 2048, 00:09:19.576 "data_size": 63488 00:09:19.576 }, 00:09:19.576 { 00:09:19.576 "name": "BaseBdev3", 00:09:19.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.576 "is_configured": false, 00:09:19.576 "data_offset": 0, 00:09:19.576 "data_size": 0 00:09:19.576 } 00:09:19.576 ] 00:09:19.576 }' 00:09:19.576 11:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.576 11:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.834 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.834 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.834 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.093 [2024-11-04 11:41:45.370068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.093 [2024-11-04 11:41:45.370596] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.093 [2024-11-04 11:41:45.370650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.093 BaseBdev3 00:09:20.093 [2024-11-04 11:41:45.371020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:20.093 [2024-11-04 11:41:45.371237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.093 [2024-11-04 11:41:45.371258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.093 [2024-11-04 11:41:45.371467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.093 [ 00:09:20.093 { 00:09:20.093 "name": "BaseBdev3", 00:09:20.093 "aliases": [ 00:09:20.093 "ea32e3b2-5bf8-429b-baa5-ec29212dec97" 00:09:20.093 ], 00:09:20.093 "product_name": "Malloc disk", 00:09:20.093 "block_size": 512, 00:09:20.093 "num_blocks": 65536, 00:09:20.093 "uuid": "ea32e3b2-5bf8-429b-baa5-ec29212dec97", 00:09:20.093 "assigned_rate_limits": { 00:09:20.093 "rw_ios_per_sec": 0, 00:09:20.093 "rw_mbytes_per_sec": 0, 00:09:20.093 "r_mbytes_per_sec": 0, 00:09:20.093 "w_mbytes_per_sec": 0 00:09:20.093 }, 00:09:20.093 "claimed": true, 00:09:20.093 "claim_type": "exclusive_write", 00:09:20.093 "zoned": false, 00:09:20.093 "supported_io_types": { 00:09:20.093 "read": true, 00:09:20.093 "write": true, 00:09:20.093 "unmap": true, 00:09:20.093 "flush": true, 00:09:20.093 "reset": true, 00:09:20.093 "nvme_admin": false, 00:09:20.093 "nvme_io": false, 00:09:20.093 "nvme_io_md": false, 00:09:20.093 "write_zeroes": true, 00:09:20.093 "zcopy": true, 00:09:20.093 "get_zone_info": false, 00:09:20.093 "zone_management": false, 00:09:20.093 "zone_append": false, 00:09:20.093 "compare": false, 00:09:20.093 "compare_and_write": false, 00:09:20.093 "abort": true, 00:09:20.093 "seek_hole": false, 00:09:20.093 "seek_data": false, 00:09:20.093 "copy": true, 00:09:20.093 "nvme_iov_md": false 00:09:20.093 }, 00:09:20.093 "memory_domains": [ 00:09:20.093 { 00:09:20.093 "dma_device_id": "system", 00:09:20.093 "dma_device_type": 1 00:09:20.093 }, 00:09:20.093 { 00:09:20.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.093 "dma_device_type": 2 00:09:20.093 } 00:09:20.093 ], 00:09:20.093 "driver_specific": 
{} 00:09:20.093 } 00:09:20.093 ] 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.093 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.093 "name": "Existed_Raid", 00:09:20.093 "uuid": "5eba1566-18a2-47f5-99b4-93eed0293682", 00:09:20.093 "strip_size_kb": 64, 00:09:20.093 "state": "online", 00:09:20.093 "raid_level": "raid0", 00:09:20.093 "superblock": true, 00:09:20.093 "num_base_bdevs": 3, 00:09:20.093 "num_base_bdevs_discovered": 3, 00:09:20.093 "num_base_bdevs_operational": 3, 00:09:20.093 "base_bdevs_list": [ 00:09:20.093 { 00:09:20.093 "name": "BaseBdev1", 00:09:20.093 "uuid": "63c56bc3-fb68-4f3c-a97d-6d070c7eebe2", 00:09:20.093 "is_configured": true, 00:09:20.093 "data_offset": 2048, 00:09:20.093 "data_size": 63488 00:09:20.093 }, 00:09:20.093 { 00:09:20.093 "name": "BaseBdev2", 00:09:20.093 "uuid": "88586dea-8b79-4b83-9321-96e88a46a6a6", 00:09:20.093 "is_configured": true, 00:09:20.093 "data_offset": 2048, 00:09:20.093 "data_size": 63488 00:09:20.093 }, 00:09:20.093 { 00:09:20.093 "name": "BaseBdev3", 00:09:20.094 "uuid": "ea32e3b2-5bf8-429b-baa5-ec29212dec97", 00:09:20.094 "is_configured": true, 00:09:20.094 "data_offset": 2048, 00:09:20.094 "data_size": 63488 00:09:20.094 } 00:09:20.094 ] 00:09:20.094 }' 00:09:20.094 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.094 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.352 [2024-11-04 11:41:45.853837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.352 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.611 "name": "Existed_Raid", 00:09:20.611 "aliases": [ 00:09:20.611 "5eba1566-18a2-47f5-99b4-93eed0293682" 00:09:20.611 ], 00:09:20.611 "product_name": "Raid Volume", 00:09:20.611 "block_size": 512, 00:09:20.611 "num_blocks": 190464, 00:09:20.611 "uuid": "5eba1566-18a2-47f5-99b4-93eed0293682", 00:09:20.611 "assigned_rate_limits": { 00:09:20.611 "rw_ios_per_sec": 0, 00:09:20.611 "rw_mbytes_per_sec": 0, 00:09:20.611 "r_mbytes_per_sec": 0, 00:09:20.611 "w_mbytes_per_sec": 0 00:09:20.611 }, 00:09:20.611 "claimed": false, 00:09:20.611 "zoned": false, 00:09:20.611 "supported_io_types": { 00:09:20.611 "read": true, 00:09:20.611 "write": true, 00:09:20.611 "unmap": true, 00:09:20.611 "flush": true, 00:09:20.611 "reset": true, 00:09:20.611 "nvme_admin": false, 00:09:20.611 "nvme_io": false, 00:09:20.611 "nvme_io_md": false, 00:09:20.611 
"write_zeroes": true, 00:09:20.611 "zcopy": false, 00:09:20.611 "get_zone_info": false, 00:09:20.611 "zone_management": false, 00:09:20.611 "zone_append": false, 00:09:20.611 "compare": false, 00:09:20.611 "compare_and_write": false, 00:09:20.611 "abort": false, 00:09:20.611 "seek_hole": false, 00:09:20.611 "seek_data": false, 00:09:20.611 "copy": false, 00:09:20.611 "nvme_iov_md": false 00:09:20.611 }, 00:09:20.611 "memory_domains": [ 00:09:20.611 { 00:09:20.611 "dma_device_id": "system", 00:09:20.611 "dma_device_type": 1 00:09:20.611 }, 00:09:20.611 { 00:09:20.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.611 "dma_device_type": 2 00:09:20.611 }, 00:09:20.611 { 00:09:20.611 "dma_device_id": "system", 00:09:20.611 "dma_device_type": 1 00:09:20.611 }, 00:09:20.611 { 00:09:20.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.611 "dma_device_type": 2 00:09:20.611 }, 00:09:20.611 { 00:09:20.611 "dma_device_id": "system", 00:09:20.611 "dma_device_type": 1 00:09:20.611 }, 00:09:20.611 { 00:09:20.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.611 "dma_device_type": 2 00:09:20.611 } 00:09:20.611 ], 00:09:20.611 "driver_specific": { 00:09:20.611 "raid": { 00:09:20.611 "uuid": "5eba1566-18a2-47f5-99b4-93eed0293682", 00:09:20.611 "strip_size_kb": 64, 00:09:20.611 "state": "online", 00:09:20.611 "raid_level": "raid0", 00:09:20.611 "superblock": true, 00:09:20.611 "num_base_bdevs": 3, 00:09:20.611 "num_base_bdevs_discovered": 3, 00:09:20.611 "num_base_bdevs_operational": 3, 00:09:20.611 "base_bdevs_list": [ 00:09:20.611 { 00:09:20.611 "name": "BaseBdev1", 00:09:20.611 "uuid": "63c56bc3-fb68-4f3c-a97d-6d070c7eebe2", 00:09:20.611 "is_configured": true, 00:09:20.611 "data_offset": 2048, 00:09:20.611 "data_size": 63488 00:09:20.611 }, 00:09:20.611 { 00:09:20.611 "name": "BaseBdev2", 00:09:20.611 "uuid": "88586dea-8b79-4b83-9321-96e88a46a6a6", 00:09:20.611 "is_configured": true, 00:09:20.611 "data_offset": 2048, 00:09:20.611 "data_size": 63488 00:09:20.611 }, 
00:09:20.611 { 00:09:20.611 "name": "BaseBdev3", 00:09:20.611 "uuid": "ea32e3b2-5bf8-429b-baa5-ec29212dec97", 00:09:20.611 "is_configured": true, 00:09:20.611 "data_offset": 2048, 00:09:20.611 "data_size": 63488 00:09:20.611 } 00:09:20.611 ] 00:09:20.611 } 00:09:20.611 } 00:09:20.611 }' 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.611 BaseBdev2 00:09:20.611 BaseBdev3' 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.611 11:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.611 
11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.611 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.871 [2024-11-04 11:41:46.137051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.871 [2024-11-04 11:41:46.137182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.871 [2024-11-04 11:41:46.137317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.871 "name": "Existed_Raid", 00:09:20.871 "uuid": "5eba1566-18a2-47f5-99b4-93eed0293682", 00:09:20.871 "strip_size_kb": 64, 00:09:20.871 "state": "offline", 00:09:20.871 "raid_level": "raid0", 00:09:20.871 "superblock": true, 00:09:20.871 "num_base_bdevs": 3, 00:09:20.871 "num_base_bdevs_discovered": 2, 00:09:20.871 "num_base_bdevs_operational": 2, 00:09:20.871 "base_bdevs_list": [ 00:09:20.871 { 00:09:20.871 "name": null, 00:09:20.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.871 "is_configured": false, 00:09:20.871 "data_offset": 0, 00:09:20.871 "data_size": 63488 00:09:20.871 }, 00:09:20.871 { 00:09:20.871 "name": "BaseBdev2", 00:09:20.871 "uuid": "88586dea-8b79-4b83-9321-96e88a46a6a6", 00:09:20.871 "is_configured": true, 00:09:20.871 "data_offset": 2048, 00:09:20.871 "data_size": 63488 00:09:20.871 }, 00:09:20.871 { 00:09:20.871 "name": "BaseBdev3", 00:09:20.871 "uuid": "ea32e3b2-5bf8-429b-baa5-ec29212dec97", 
00:09:20.871 "is_configured": true, 00:09:20.871 "data_offset": 2048, 00:09:20.871 "data_size": 63488 00:09:20.871 } 00:09:20.871 ] 00:09:20.871 }' 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.871 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.438 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.438 [2024-11-04 11:41:46.837092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.696 11:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.696 [2024-11-04 11:41:47.021496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.696 [2024-11-04 11:41:47.021593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.696 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 BaseBdev2 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:21.956 11:41:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.956 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 [ 00:09:21.956 { 00:09:21.956 "name": "BaseBdev2", 00:09:21.956 "aliases": [ 00:09:21.956 "60a26e5e-eb43-4f17-ba3b-0fb485b386c9" 00:09:21.956 ], 00:09:21.956 "product_name": "Malloc disk", 00:09:21.956 "block_size": 512, 00:09:21.956 "num_blocks": 65536, 00:09:21.956 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:21.956 "assigned_rate_limits": { 00:09:21.956 "rw_ios_per_sec": 0, 00:09:21.956 "rw_mbytes_per_sec": 0, 00:09:21.956 "r_mbytes_per_sec": 0, 00:09:21.956 "w_mbytes_per_sec": 0 00:09:21.956 }, 00:09:21.956 "claimed": false, 00:09:21.956 "zoned": false, 00:09:21.956 "supported_io_types": { 00:09:21.956 "read": true, 00:09:21.956 "write": true, 00:09:21.956 "unmap": true, 00:09:21.956 "flush": true, 00:09:21.956 "reset": true, 00:09:21.956 "nvme_admin": false, 00:09:21.956 "nvme_io": false, 00:09:21.956 "nvme_io_md": false, 00:09:21.956 "write_zeroes": true, 00:09:21.956 "zcopy": true, 00:09:21.956 "get_zone_info": false, 00:09:21.956 
"zone_management": false, 00:09:21.956 "zone_append": false, 00:09:21.956 "compare": false, 00:09:21.956 "compare_and_write": false, 00:09:21.956 "abort": true, 00:09:21.956 "seek_hole": false, 00:09:21.956 "seek_data": false, 00:09:21.956 "copy": true, 00:09:21.956 "nvme_iov_md": false 00:09:21.956 }, 00:09:21.956 "memory_domains": [ 00:09:21.956 { 00:09:21.956 "dma_device_id": "system", 00:09:21.956 "dma_device_type": 1 00:09:21.956 }, 00:09:21.956 { 00:09:21.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.957 "dma_device_type": 2 00:09:21.957 } 00:09:21.957 ], 00:09:21.957 "driver_specific": {} 00:09:21.957 } 00:09:21.957 ] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.957 BaseBdev3 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.957 [ 00:09:21.957 { 00:09:21.957 "name": "BaseBdev3", 00:09:21.957 "aliases": [ 00:09:21.957 "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb" 00:09:21.957 ], 00:09:21.957 "product_name": "Malloc disk", 00:09:21.957 "block_size": 512, 00:09:21.957 "num_blocks": 65536, 00:09:21.957 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:21.957 "assigned_rate_limits": { 00:09:21.957 "rw_ios_per_sec": 0, 00:09:21.957 "rw_mbytes_per_sec": 0, 00:09:21.957 "r_mbytes_per_sec": 0, 00:09:21.957 "w_mbytes_per_sec": 0 00:09:21.957 }, 00:09:21.957 "claimed": false, 00:09:21.957 "zoned": false, 00:09:21.957 "supported_io_types": { 00:09:21.957 "read": true, 00:09:21.957 "write": true, 00:09:21.957 "unmap": true, 00:09:21.957 "flush": true, 00:09:21.957 "reset": true, 00:09:21.957 "nvme_admin": false, 00:09:21.957 "nvme_io": false, 00:09:21.957 "nvme_io_md": false, 00:09:21.957 "write_zeroes": true, 00:09:21.957 
"zcopy": true, 00:09:21.957 "get_zone_info": false, 00:09:21.957 "zone_management": false, 00:09:21.957 "zone_append": false, 00:09:21.957 "compare": false, 00:09:21.957 "compare_and_write": false, 00:09:21.957 "abort": true, 00:09:21.957 "seek_hole": false, 00:09:21.957 "seek_data": false, 00:09:21.957 "copy": true, 00:09:21.957 "nvme_iov_md": false 00:09:21.957 }, 00:09:21.957 "memory_domains": [ 00:09:21.957 { 00:09:21.957 "dma_device_id": "system", 00:09:21.957 "dma_device_type": 1 00:09:21.957 }, 00:09:21.957 { 00:09:21.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.957 "dma_device_type": 2 00:09:21.957 } 00:09:21.957 ], 00:09:21.957 "driver_specific": {} 00:09:21.957 } 00:09:21.957 ] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.957 [2024-11-04 11:41:47.391300] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.957 [2024-11-04 11:41:47.391470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.957 [2024-11-04 11:41:47.391546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.957 [2024-11-04 11:41:47.394520] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.957 11:41:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.957 "name": "Existed_Raid", 00:09:21.957 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:21.957 "strip_size_kb": 64, 00:09:21.957 "state": "configuring", 00:09:21.957 "raid_level": "raid0", 00:09:21.957 "superblock": true, 00:09:21.957 "num_base_bdevs": 3, 00:09:21.957 "num_base_bdevs_discovered": 2, 00:09:21.957 "num_base_bdevs_operational": 3, 00:09:21.957 "base_bdevs_list": [ 00:09:21.957 { 00:09:21.957 "name": "BaseBdev1", 00:09:21.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.957 "is_configured": false, 00:09:21.957 "data_offset": 0, 00:09:21.957 "data_size": 0 00:09:21.957 }, 00:09:21.957 { 00:09:21.957 "name": "BaseBdev2", 00:09:21.957 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:21.957 "is_configured": true, 00:09:21.957 "data_offset": 2048, 00:09:21.957 "data_size": 63488 00:09:21.957 }, 00:09:21.957 { 00:09:21.957 "name": "BaseBdev3", 00:09:21.957 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:21.957 "is_configured": true, 00:09:21.957 "data_offset": 2048, 00:09:21.957 "data_size": 63488 00:09:21.957 } 00:09:21.957 ] 00:09:21.957 }' 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.957 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.242 [2024-11-04 11:41:47.754638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.242 11:41:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.242 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.502 "name": "Existed_Raid", 00:09:22.502 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:22.502 "strip_size_kb": 64, 
00:09:22.502 "state": "configuring", 00:09:22.502 "raid_level": "raid0", 00:09:22.502 "superblock": true, 00:09:22.502 "num_base_bdevs": 3, 00:09:22.502 "num_base_bdevs_discovered": 1, 00:09:22.502 "num_base_bdevs_operational": 3, 00:09:22.502 "base_bdevs_list": [ 00:09:22.502 { 00:09:22.502 "name": "BaseBdev1", 00:09:22.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.502 "is_configured": false, 00:09:22.502 "data_offset": 0, 00:09:22.502 "data_size": 0 00:09:22.502 }, 00:09:22.502 { 00:09:22.502 "name": null, 00:09:22.502 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:22.502 "is_configured": false, 00:09:22.502 "data_offset": 0, 00:09:22.502 "data_size": 63488 00:09:22.502 }, 00:09:22.502 { 00:09:22.502 "name": "BaseBdev3", 00:09:22.502 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:22.502 "is_configured": true, 00:09:22.502 "data_offset": 2048, 00:09:22.502 "data_size": 63488 00:09:22.502 } 00:09:22.502 ] 00:09:22.502 }' 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.502 11:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.761 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.021 [2024-11-04 11:41:48.322806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.021 BaseBdev1 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.021 
[ 00:09:23.021 { 00:09:23.021 "name": "BaseBdev1", 00:09:23.021 "aliases": [ 00:09:23.021 "6335847e-c356-4192-80b9-d6c0103c4e2c" 00:09:23.021 ], 00:09:23.021 "product_name": "Malloc disk", 00:09:23.021 "block_size": 512, 00:09:23.021 "num_blocks": 65536, 00:09:23.021 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:23.021 "assigned_rate_limits": { 00:09:23.021 "rw_ios_per_sec": 0, 00:09:23.021 "rw_mbytes_per_sec": 0, 00:09:23.021 "r_mbytes_per_sec": 0, 00:09:23.021 "w_mbytes_per_sec": 0 00:09:23.021 }, 00:09:23.021 "claimed": true, 00:09:23.021 "claim_type": "exclusive_write", 00:09:23.021 "zoned": false, 00:09:23.021 "supported_io_types": { 00:09:23.021 "read": true, 00:09:23.021 "write": true, 00:09:23.021 "unmap": true, 00:09:23.021 "flush": true, 00:09:23.021 "reset": true, 00:09:23.021 "nvme_admin": false, 00:09:23.021 "nvme_io": false, 00:09:23.021 "nvme_io_md": false, 00:09:23.021 "write_zeroes": true, 00:09:23.021 "zcopy": true, 00:09:23.021 "get_zone_info": false, 00:09:23.021 "zone_management": false, 00:09:23.021 "zone_append": false, 00:09:23.021 "compare": false, 00:09:23.021 "compare_and_write": false, 00:09:23.021 "abort": true, 00:09:23.021 "seek_hole": false, 00:09:23.021 "seek_data": false, 00:09:23.021 "copy": true, 00:09:23.021 "nvme_iov_md": false 00:09:23.021 }, 00:09:23.021 "memory_domains": [ 00:09:23.021 { 00:09:23.021 "dma_device_id": "system", 00:09:23.021 "dma_device_type": 1 00:09:23.021 }, 00:09:23.021 { 00:09:23.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.021 "dma_device_type": 2 00:09:23.021 } 00:09:23.021 ], 00:09:23.021 "driver_specific": {} 00:09:23.021 } 00:09:23.021 ] 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.021 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.021 "name": "Existed_Raid", 00:09:23.021 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:23.021 "strip_size_kb": 64, 00:09:23.021 "state": "configuring", 00:09:23.021 "raid_level": "raid0", 00:09:23.021 "superblock": true, 
00:09:23.021 "num_base_bdevs": 3, 00:09:23.021 "num_base_bdevs_discovered": 2, 00:09:23.021 "num_base_bdevs_operational": 3, 00:09:23.021 "base_bdevs_list": [ 00:09:23.021 { 00:09:23.021 "name": "BaseBdev1", 00:09:23.021 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:23.021 "is_configured": true, 00:09:23.021 "data_offset": 2048, 00:09:23.021 "data_size": 63488 00:09:23.021 }, 00:09:23.021 { 00:09:23.021 "name": null, 00:09:23.021 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:23.021 "is_configured": false, 00:09:23.021 "data_offset": 0, 00:09:23.021 "data_size": 63488 00:09:23.021 }, 00:09:23.021 { 00:09:23.021 "name": "BaseBdev3", 00:09:23.021 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:23.021 "is_configured": true, 00:09:23.021 "data_offset": 2048, 00:09:23.021 "data_size": 63488 00:09:23.021 } 00:09:23.021 ] 00:09:23.021 }' 00:09:23.022 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.022 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.590 [2024-11-04 11:41:48.862003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.590 "name": "Existed_Raid", 00:09:23.590 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:23.590 "strip_size_kb": 64, 00:09:23.590 "state": "configuring", 00:09:23.590 "raid_level": "raid0", 00:09:23.590 "superblock": true, 00:09:23.590 "num_base_bdevs": 3, 00:09:23.590 "num_base_bdevs_discovered": 1, 00:09:23.590 "num_base_bdevs_operational": 3, 00:09:23.590 "base_bdevs_list": [ 00:09:23.590 { 00:09:23.590 "name": "BaseBdev1", 00:09:23.590 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:23.590 "is_configured": true, 00:09:23.590 "data_offset": 2048, 00:09:23.590 "data_size": 63488 00:09:23.590 }, 00:09:23.590 { 00:09:23.590 "name": null, 00:09:23.590 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:23.590 "is_configured": false, 00:09:23.590 "data_offset": 0, 00:09:23.590 "data_size": 63488 00:09:23.590 }, 00:09:23.590 { 00:09:23.590 "name": null, 00:09:23.590 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:23.590 "is_configured": false, 00:09:23.590 "data_offset": 0, 00:09:23.590 "data_size": 63488 00:09:23.590 } 00:09:23.590 ] 00:09:23.590 }' 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.590 11:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.850 [2024-11-04 11:41:49.341276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.850 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.110 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.110 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.110 "name": "Existed_Raid", 00:09:24.110 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:24.110 "strip_size_kb": 64, 00:09:24.110 "state": "configuring", 00:09:24.110 "raid_level": "raid0", 00:09:24.110 "superblock": true, 00:09:24.110 "num_base_bdevs": 3, 00:09:24.110 "num_base_bdevs_discovered": 2, 00:09:24.110 "num_base_bdevs_operational": 3, 00:09:24.110 "base_bdevs_list": [ 00:09:24.110 { 00:09:24.110 "name": "BaseBdev1", 00:09:24.110 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:24.110 "is_configured": true, 00:09:24.110 "data_offset": 2048, 00:09:24.110 "data_size": 63488 00:09:24.110 }, 00:09:24.110 { 00:09:24.110 "name": null, 00:09:24.110 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:24.110 "is_configured": false, 00:09:24.110 "data_offset": 0, 00:09:24.110 "data_size": 63488 00:09:24.110 }, 00:09:24.110 { 00:09:24.110 "name": "BaseBdev3", 00:09:24.110 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:24.110 "is_configured": true, 00:09:24.110 "data_offset": 2048, 00:09:24.110 "data_size": 63488 00:09:24.110 } 00:09:24.110 ] 00:09:24.110 }' 00:09:24.110 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.110 11:41:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.369 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.369 [2024-11-04 11:41:49.816550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.636 "name": "Existed_Raid", 00:09:24.636 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:24.636 "strip_size_kb": 64, 00:09:24.636 "state": "configuring", 00:09:24.636 "raid_level": "raid0", 00:09:24.636 "superblock": true, 00:09:24.636 "num_base_bdevs": 3, 00:09:24.636 "num_base_bdevs_discovered": 1, 00:09:24.636 "num_base_bdevs_operational": 3, 00:09:24.636 "base_bdevs_list": [ 00:09:24.636 { 00:09:24.636 "name": null, 00:09:24.636 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:24.636 "is_configured": false, 00:09:24.636 "data_offset": 0, 00:09:24.636 "data_size": 63488 00:09:24.636 }, 00:09:24.636 { 00:09:24.636 "name": null, 00:09:24.636 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:24.636 "is_configured": false, 00:09:24.636 "data_offset": 0, 00:09:24.636 
"data_size": 63488 00:09:24.636 }, 00:09:24.636 { 00:09:24.636 "name": "BaseBdev3", 00:09:24.636 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:24.636 "is_configured": true, 00:09:24.636 "data_offset": 2048, 00:09:24.636 "data_size": 63488 00:09:24.636 } 00:09:24.636 ] 00:09:24.636 }' 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.636 11:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.896 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.896 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.896 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.896 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.156 [2024-11-04 11:41:50.451126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.156 11:41:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.156 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.157 "name": "Existed_Raid", 00:09:25.157 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:25.157 "strip_size_kb": 64, 00:09:25.157 "state": "configuring", 00:09:25.157 "raid_level": "raid0", 00:09:25.157 "superblock": true, 00:09:25.157 "num_base_bdevs": 3, 00:09:25.157 
"num_base_bdevs_discovered": 2, 00:09:25.157 "num_base_bdevs_operational": 3, 00:09:25.157 "base_bdevs_list": [ 00:09:25.157 { 00:09:25.157 "name": null, 00:09:25.157 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:25.157 "is_configured": false, 00:09:25.157 "data_offset": 0, 00:09:25.157 "data_size": 63488 00:09:25.157 }, 00:09:25.157 { 00:09:25.157 "name": "BaseBdev2", 00:09:25.157 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:25.157 "is_configured": true, 00:09:25.157 "data_offset": 2048, 00:09:25.157 "data_size": 63488 00:09:25.157 }, 00:09:25.157 { 00:09:25.157 "name": "BaseBdev3", 00:09:25.157 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:25.157 "is_configured": true, 00:09:25.157 "data_offset": 2048, 00:09:25.157 "data_size": 63488 00:09:25.157 } 00:09:25.157 ] 00:09:25.157 }' 00:09:25.157 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.157 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.416 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.416 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.416 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.416 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.416 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.682 11:41:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6335847e-c356-4192-80b9-d6c0103c4e2c 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.682 11:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.682 [2024-11-04 11:41:51.017949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:25.682 [2024-11-04 11:41:51.018214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:25.682 [2024-11-04 11:41:51.018232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.682 [2024-11-04 11:41:51.018604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:25.682 [2024-11-04 11:41:51.018776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:25.682 [2024-11-04 11:41:51.018795] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:25.682 NewBaseBdev 00:09:25.682 [2024-11-04 11:41:51.019001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:25.682 
11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.682 [ 00:09:25.682 { 00:09:25.682 "name": "NewBaseBdev", 00:09:25.682 "aliases": [ 00:09:25.682 "6335847e-c356-4192-80b9-d6c0103c4e2c" 00:09:25.682 ], 00:09:25.682 "product_name": "Malloc disk", 00:09:25.682 "block_size": 512, 00:09:25.682 "num_blocks": 65536, 00:09:25.682 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:25.682 "assigned_rate_limits": { 00:09:25.682 "rw_ios_per_sec": 0, 00:09:25.682 "rw_mbytes_per_sec": 0, 00:09:25.682 "r_mbytes_per_sec": 0, 00:09:25.682 "w_mbytes_per_sec": 0 00:09:25.682 }, 00:09:25.682 "claimed": true, 00:09:25.682 "claim_type": "exclusive_write", 00:09:25.682 "zoned": false, 00:09:25.682 "supported_io_types": { 00:09:25.682 "read": true, 00:09:25.682 "write": true, 00:09:25.682 
"unmap": true, 00:09:25.682 "flush": true, 00:09:25.682 "reset": true, 00:09:25.682 "nvme_admin": false, 00:09:25.682 "nvme_io": false, 00:09:25.682 "nvme_io_md": false, 00:09:25.682 "write_zeroes": true, 00:09:25.682 "zcopy": true, 00:09:25.682 "get_zone_info": false, 00:09:25.682 "zone_management": false, 00:09:25.682 "zone_append": false, 00:09:25.682 "compare": false, 00:09:25.682 "compare_and_write": false, 00:09:25.682 "abort": true, 00:09:25.682 "seek_hole": false, 00:09:25.682 "seek_data": false, 00:09:25.682 "copy": true, 00:09:25.682 "nvme_iov_md": false 00:09:25.682 }, 00:09:25.682 "memory_domains": [ 00:09:25.682 { 00:09:25.682 "dma_device_id": "system", 00:09:25.682 "dma_device_type": 1 00:09:25.682 }, 00:09:25.682 { 00:09:25.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.682 "dma_device_type": 2 00:09:25.682 } 00:09:25.682 ], 00:09:25.682 "driver_specific": {} 00:09:25.682 } 00:09:25.682 ] 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.682 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.683 "name": "Existed_Raid", 00:09:25.683 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:25.683 "strip_size_kb": 64, 00:09:25.683 "state": "online", 00:09:25.683 "raid_level": "raid0", 00:09:25.683 "superblock": true, 00:09:25.683 "num_base_bdevs": 3, 00:09:25.683 "num_base_bdevs_discovered": 3, 00:09:25.683 "num_base_bdevs_operational": 3, 00:09:25.683 "base_bdevs_list": [ 00:09:25.683 { 00:09:25.683 "name": "NewBaseBdev", 00:09:25.683 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:25.683 "is_configured": true, 00:09:25.683 "data_offset": 2048, 00:09:25.683 "data_size": 63488 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "name": "BaseBdev2", 00:09:25.683 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:25.683 "is_configured": true, 00:09:25.683 "data_offset": 2048, 00:09:25.683 "data_size": 63488 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "name": "BaseBdev3", 00:09:25.683 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:25.683 
"is_configured": true, 00:09:25.683 "data_offset": 2048, 00:09:25.683 "data_size": 63488 00:09:25.683 } 00:09:25.683 ] 00:09:25.683 }' 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.683 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.260 [2024-11-04 11:41:51.549509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.260 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.260 "name": "Existed_Raid", 00:09:26.260 "aliases": [ 00:09:26.260 "fb616a76-0888-45ee-95d4-55a9eede5071" 00:09:26.260 ], 00:09:26.260 "product_name": "Raid 
Volume", 00:09:26.260 "block_size": 512, 00:09:26.260 "num_blocks": 190464, 00:09:26.260 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:26.260 "assigned_rate_limits": { 00:09:26.260 "rw_ios_per_sec": 0, 00:09:26.260 "rw_mbytes_per_sec": 0, 00:09:26.260 "r_mbytes_per_sec": 0, 00:09:26.260 "w_mbytes_per_sec": 0 00:09:26.260 }, 00:09:26.260 "claimed": false, 00:09:26.260 "zoned": false, 00:09:26.260 "supported_io_types": { 00:09:26.260 "read": true, 00:09:26.260 "write": true, 00:09:26.260 "unmap": true, 00:09:26.260 "flush": true, 00:09:26.260 "reset": true, 00:09:26.260 "nvme_admin": false, 00:09:26.260 "nvme_io": false, 00:09:26.260 "nvme_io_md": false, 00:09:26.260 "write_zeroes": true, 00:09:26.260 "zcopy": false, 00:09:26.260 "get_zone_info": false, 00:09:26.260 "zone_management": false, 00:09:26.260 "zone_append": false, 00:09:26.260 "compare": false, 00:09:26.260 "compare_and_write": false, 00:09:26.260 "abort": false, 00:09:26.260 "seek_hole": false, 00:09:26.260 "seek_data": false, 00:09:26.260 "copy": false, 00:09:26.260 "nvme_iov_md": false 00:09:26.260 }, 00:09:26.260 "memory_domains": [ 00:09:26.260 { 00:09:26.260 "dma_device_id": "system", 00:09:26.260 "dma_device_type": 1 00:09:26.260 }, 00:09:26.260 { 00:09:26.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.260 "dma_device_type": 2 00:09:26.260 }, 00:09:26.260 { 00:09:26.260 "dma_device_id": "system", 00:09:26.260 "dma_device_type": 1 00:09:26.260 }, 00:09:26.260 { 00:09:26.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.260 "dma_device_type": 2 00:09:26.260 }, 00:09:26.260 { 00:09:26.260 "dma_device_id": "system", 00:09:26.260 "dma_device_type": 1 00:09:26.260 }, 00:09:26.260 { 00:09:26.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.260 "dma_device_type": 2 00:09:26.260 } 00:09:26.260 ], 00:09:26.260 "driver_specific": { 00:09:26.260 "raid": { 00:09:26.261 "uuid": "fb616a76-0888-45ee-95d4-55a9eede5071", 00:09:26.261 "strip_size_kb": 64, 00:09:26.261 "state": "online", 
00:09:26.261 "raid_level": "raid0", 00:09:26.261 "superblock": true, 00:09:26.261 "num_base_bdevs": 3, 00:09:26.261 "num_base_bdevs_discovered": 3, 00:09:26.261 "num_base_bdevs_operational": 3, 00:09:26.261 "base_bdevs_list": [ 00:09:26.261 { 00:09:26.261 "name": "NewBaseBdev", 00:09:26.261 "uuid": "6335847e-c356-4192-80b9-d6c0103c4e2c", 00:09:26.261 "is_configured": true, 00:09:26.261 "data_offset": 2048, 00:09:26.261 "data_size": 63488 00:09:26.261 }, 00:09:26.261 { 00:09:26.261 "name": "BaseBdev2", 00:09:26.261 "uuid": "60a26e5e-eb43-4f17-ba3b-0fb485b386c9", 00:09:26.261 "is_configured": true, 00:09:26.261 "data_offset": 2048, 00:09:26.261 "data_size": 63488 00:09:26.261 }, 00:09:26.261 { 00:09:26.261 "name": "BaseBdev3", 00:09:26.261 "uuid": "1af2f3f4-66f0-4ee4-8bc6-a3303408e7bb", 00:09:26.261 "is_configured": true, 00:09:26.261 "data_offset": 2048, 00:09:26.261 "data_size": 63488 00:09:26.261 } 00:09:26.261 ] 00:09:26.261 } 00:09:26.261 } 00:09:26.261 }' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:26.261 BaseBdev2 00:09:26.261 BaseBdev3' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.261 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.520 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.521 [2024-11-04 11:41:51.844642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.521 [2024-11-04 11:41:51.844680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.521 [2024-11-04 11:41:51.844777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.521 [2024-11-04 11:41:51.844841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.521 [2024-11-04 11:41:51.844855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64657 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64657 ']' 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64657 00:09:26.521 11:41:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64657 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:26.521 killing process with pid 64657 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64657' 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64657 00:09:26.521 [2024-11-04 11:41:51.892461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.521 11:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64657 00:09:26.780 [2024-11-04 11:41:52.213036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.167 11:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:28.167 00:09:28.167 real 0m11.194s 00:09:28.167 user 0m17.550s 00:09:28.167 sys 0m1.973s 00:09:28.167 11:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.167 11:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.167 ************************************ 00:09:28.167 END TEST raid_state_function_test_sb 00:09:28.167 ************************************ 00:09:28.167 11:41:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:28.167 11:41:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:28.167 11:41:53 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.167 11:41:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.167 ************************************ 00:09:28.167 START TEST raid_superblock_test 00:09:28.167 ************************************ 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:28.167 11:41:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65283 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65283 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65283 ']' 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.167 11:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.167 [2024-11-04 11:41:53.569324] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:09:28.167 [2024-11-04 11:41:53.569463] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65283 ] 00:09:28.426 [2024-11-04 11:41:53.745321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.426 [2024-11-04 11:41:53.868331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.685 [2024-11-04 11:41:54.073105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.685 [2024-11-04 11:41:54.073158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:28.944 
11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.944 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.203 malloc1 00:09:29.203 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.203 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:29.203 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.203 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-04 11:41:54.495221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:29.204 [2024-11-04 11:41:54.495287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.204 [2024-11-04 11:41:54.495311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:29.204 [2024-11-04 11:41:54.495320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.204 [2024-11-04 11:41:54.497512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.204 [2024-11-04 11:41:54.497545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:29.204 pt1 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 malloc2 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-04 11:41:54.552448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:29.204 [2024-11-04 11:41:54.552506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.204 [2024-11-04 11:41:54.552531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:29.204 [2024-11-04 11:41:54.552540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.204 [2024-11-04 11:41:54.554647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.204 [2024-11-04 11:41:54.554680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:29.204 
pt2 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 malloc3 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-04 11:41:54.619888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:29.204 [2024-11-04 11:41:54.619954] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.204 [2024-11-04 11:41:54.619978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:29.204 [2024-11-04 11:41:54.619988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.204 [2024-11-04 11:41:54.622390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.204 [2024-11-04 11:41:54.622433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:29.204 pt3 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-04 11:41:54.631931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:29.204 [2024-11-04 11:41:54.633970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:29.204 [2024-11-04 11:41:54.634048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:29.204 [2024-11-04 11:41:54.634228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:29.204 [2024-11-04 11:41:54.634266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.204 [2024-11-04 11:41:54.634578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:29.204 [2024-11-04 11:41:54.634802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:29.204 [2024-11-04 11:41:54.634823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:29.204 [2024-11-04 11:41:54.635019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.204 11:41:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.204 "name": "raid_bdev1", 00:09:29.204 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:29.204 "strip_size_kb": 64, 00:09:29.204 "state": "online", 00:09:29.204 "raid_level": "raid0", 00:09:29.204 "superblock": true, 00:09:29.204 "num_base_bdevs": 3, 00:09:29.204 "num_base_bdevs_discovered": 3, 00:09:29.204 "num_base_bdevs_operational": 3, 00:09:29.204 "base_bdevs_list": [ 00:09:29.204 { 00:09:29.204 "name": "pt1", 00:09:29.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.204 "is_configured": true, 00:09:29.204 "data_offset": 2048, 00:09:29.204 "data_size": 63488 00:09:29.204 }, 00:09:29.204 { 00:09:29.204 "name": "pt2", 00:09:29.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.204 "is_configured": true, 00:09:29.204 "data_offset": 2048, 00:09:29.204 "data_size": 63488 00:09:29.204 }, 00:09:29.204 { 00:09:29.204 "name": "pt3", 00:09:29.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.204 "is_configured": true, 00:09:29.204 "data_offset": 2048, 00:09:29.204 "data_size": 63488 00:09:29.204 } 00:09:29.204 ] 00:09:29.204 }' 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.204 11:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 [2024-11-04 11:41:55.123402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.772 "name": "raid_bdev1", 00:09:29.772 "aliases": [ 00:09:29.772 "fb45e541-bba8-471c-a4ad-9f7894a175f1" 00:09:29.772 ], 00:09:29.772 "product_name": "Raid Volume", 00:09:29.772 "block_size": 512, 00:09:29.772 "num_blocks": 190464, 00:09:29.772 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:29.772 "assigned_rate_limits": { 00:09:29.772 "rw_ios_per_sec": 0, 00:09:29.772 "rw_mbytes_per_sec": 0, 00:09:29.772 "r_mbytes_per_sec": 0, 00:09:29.772 "w_mbytes_per_sec": 0 00:09:29.772 }, 00:09:29.772 "claimed": false, 00:09:29.772 "zoned": false, 00:09:29.772 "supported_io_types": { 00:09:29.772 "read": true, 00:09:29.772 "write": true, 00:09:29.772 "unmap": true, 00:09:29.772 "flush": true, 00:09:29.772 "reset": true, 00:09:29.772 "nvme_admin": false, 00:09:29.772 "nvme_io": false, 00:09:29.772 "nvme_io_md": false, 00:09:29.773 "write_zeroes": true, 00:09:29.773 "zcopy": false, 00:09:29.773 "get_zone_info": false, 00:09:29.773 "zone_management": false, 00:09:29.773 "zone_append": false, 00:09:29.773 "compare": 
false, 00:09:29.773 "compare_and_write": false, 00:09:29.773 "abort": false, 00:09:29.773 "seek_hole": false, 00:09:29.773 "seek_data": false, 00:09:29.773 "copy": false, 00:09:29.773 "nvme_iov_md": false 00:09:29.773 }, 00:09:29.773 "memory_domains": [ 00:09:29.773 { 00:09:29.773 "dma_device_id": "system", 00:09:29.773 "dma_device_type": 1 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.773 "dma_device_type": 2 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "dma_device_id": "system", 00:09:29.773 "dma_device_type": 1 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.773 "dma_device_type": 2 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "dma_device_id": "system", 00:09:29.773 "dma_device_type": 1 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.773 "dma_device_type": 2 00:09:29.773 } 00:09:29.773 ], 00:09:29.773 "driver_specific": { 00:09:29.773 "raid": { 00:09:29.773 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:29.773 "strip_size_kb": 64, 00:09:29.773 "state": "online", 00:09:29.773 "raid_level": "raid0", 00:09:29.773 "superblock": true, 00:09:29.773 "num_base_bdevs": 3, 00:09:29.773 "num_base_bdevs_discovered": 3, 00:09:29.773 "num_base_bdevs_operational": 3, 00:09:29.773 "base_bdevs_list": [ 00:09:29.773 { 00:09:29.773 "name": "pt1", 00:09:29.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.773 "is_configured": true, 00:09:29.773 "data_offset": 2048, 00:09:29.773 "data_size": 63488 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "name": "pt2", 00:09:29.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.773 "is_configured": true, 00:09:29.773 "data_offset": 2048, 00:09:29.773 "data_size": 63488 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "name": "pt3", 00:09:29.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.773 "is_configured": true, 00:09:29.773 "data_offset": 2048, 00:09:29.773 "data_size": 
63488 00:09:29.773 } 00:09:29.773 ] 00:09:29.773 } 00:09:29.773 } 00:09:29.773 }' 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:29.773 pt2 00:09:29.773 pt3' 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.773 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.032 
11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:30.032 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.033 [2024-11-04 11:41:55.430812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fb45e541-bba8-471c-a4ad-9f7894a175f1 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fb45e541-bba8-471c-a4ad-9f7894a175f1 ']' 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.033 [2024-11-04 11:41:55.466491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.033 [2024-11-04 11:41:55.466533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.033 [2024-11-04 11:41:55.466646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.033 [2024-11-04 11:41:55.466716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.033 [2024-11-04 11:41:55.466728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:30.033 11:41:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.033 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.292 [2024-11-04 11:41:55.606297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:30.292 [2024-11-04 11:41:55.608197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:30.292 [2024-11-04 11:41:55.608260] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:30.292 [2024-11-04 11:41:55.608311] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:30.292 [2024-11-04 11:41:55.608359] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:30.292 [2024-11-04 11:41:55.608378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:30.292 [2024-11-04 11:41:55.608406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.292 [2024-11-04 11:41:55.608419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:30.292 request: 00:09:30.292 { 00:09:30.292 "name": "raid_bdev1", 00:09:30.292 "raid_level": "raid0", 00:09:30.292 "base_bdevs": [ 00:09:30.292 "malloc1", 00:09:30.292 "malloc2", 00:09:30.292 "malloc3" 00:09:30.292 ], 00:09:30.292 "strip_size_kb": 64, 00:09:30.292 "superblock": false, 00:09:30.292 "method": "bdev_raid_create", 00:09:30.292 "req_id": 1 00:09:30.292 } 00:09:30.292 Got JSON-RPC error response 00:09:30.292 response: 00:09:30.292 { 00:09:30.292 "code": -17, 00:09:30.292 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:30.292 } 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:30.292 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.293 [2024-11-04 11:41:55.670146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:30.293 [2024-11-04 11:41:55.670219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.293 [2024-11-04 11:41:55.670242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:30.293 [2024-11-04 11:41:55.670252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.293 [2024-11-04 11:41:55.672971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.293 [2024-11-04 11:41:55.673011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:30.293 [2024-11-04 11:41:55.673118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:30.293 [2024-11-04 11:41:55.673182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:30.293 pt1 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.293 "name": "raid_bdev1", 00:09:30.293 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:30.293 
"strip_size_kb": 64, 00:09:30.293 "state": "configuring", 00:09:30.293 "raid_level": "raid0", 00:09:30.293 "superblock": true, 00:09:30.293 "num_base_bdevs": 3, 00:09:30.293 "num_base_bdevs_discovered": 1, 00:09:30.293 "num_base_bdevs_operational": 3, 00:09:30.293 "base_bdevs_list": [ 00:09:30.293 { 00:09:30.293 "name": "pt1", 00:09:30.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.293 "is_configured": true, 00:09:30.293 "data_offset": 2048, 00:09:30.293 "data_size": 63488 00:09:30.293 }, 00:09:30.293 { 00:09:30.293 "name": null, 00:09:30.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.293 "is_configured": false, 00:09:30.293 "data_offset": 2048, 00:09:30.293 "data_size": 63488 00:09:30.293 }, 00:09:30.293 { 00:09:30.293 "name": null, 00:09:30.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.293 "is_configured": false, 00:09:30.293 "data_offset": 2048, 00:09:30.293 "data_size": 63488 00:09:30.293 } 00:09:30.293 ] 00:09:30.293 }' 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.293 11:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.861 [2024-11-04 11:41:56.185301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.861 [2024-11-04 11:41:56.185404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.861 [2024-11-04 11:41:56.185458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:30.861 [2024-11-04 11:41:56.185468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.861 [2024-11-04 11:41:56.185933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.861 [2024-11-04 11:41:56.185976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.861 [2024-11-04 11:41:56.186074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:30.861 [2024-11-04 11:41:56.186139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.861 pt2 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.861 [2024-11-04 11:41:56.193287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.861 11:41:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.861 "name": "raid_bdev1", 00:09:30.861 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:30.861 "strip_size_kb": 64, 00:09:30.861 "state": "configuring", 00:09:30.861 "raid_level": "raid0", 00:09:30.861 "superblock": true, 00:09:30.861 "num_base_bdevs": 3, 00:09:30.861 "num_base_bdevs_discovered": 1, 00:09:30.861 "num_base_bdevs_operational": 3, 00:09:30.861 "base_bdevs_list": [ 00:09:30.861 { 00:09:30.861 "name": "pt1", 00:09:30.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.861 "is_configured": true, 00:09:30.861 "data_offset": 2048, 00:09:30.861 "data_size": 63488 00:09:30.861 }, 00:09:30.861 { 00:09:30.861 "name": null, 00:09:30.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.861 "is_configured": false, 00:09:30.861 "data_offset": 0, 00:09:30.861 "data_size": 63488 00:09:30.861 }, 00:09:30.861 { 00:09:30.861 "name": null, 00:09:30.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.861 
"is_configured": false, 00:09:30.861 "data_offset": 2048, 00:09:30.861 "data_size": 63488 00:09:30.861 } 00:09:30.861 ] 00:09:30.861 }' 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.861 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.120 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:31.120 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.120 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.120 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.120 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.120 [2024-11-04 11:41:56.636511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.120 [2024-11-04 11:41:56.636602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.120 [2024-11-04 11:41:56.636623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:31.120 [2024-11-04 11:41:56.636635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.120 [2024-11-04 11:41:56.637147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.120 [2024-11-04 11:41:56.637169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.120 [2024-11-04 11:41:56.637253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.120 [2024-11-04 11:41:56.637279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.120 pt2 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.379 [2024-11-04 11:41:56.648525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:31.379 [2024-11-04 11:41:56.648606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.379 [2024-11-04 11:41:56.648625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:31.379 [2024-11-04 11:41:56.648636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.379 [2024-11-04 11:41:56.649137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.379 [2024-11-04 11:41:56.649171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:31.379 [2024-11-04 11:41:56.649261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:31.379 [2024-11-04 11:41:56.649305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:31.379 [2024-11-04 11:41:56.649469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:31.379 [2024-11-04 11:41:56.649484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:31.379 [2024-11-04 11:41:56.649768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:31.379 [2024-11-04 11:41:56.649924] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:31.379 [2024-11-04 11:41:56.649934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:31.379 [2024-11-04 11:41:56.650132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.379 pt3 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.379 "name": "raid_bdev1", 00:09:31.379 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:31.379 "strip_size_kb": 64, 00:09:31.379 "state": "online", 00:09:31.379 "raid_level": "raid0", 00:09:31.379 "superblock": true, 00:09:31.379 "num_base_bdevs": 3, 00:09:31.379 "num_base_bdevs_discovered": 3, 00:09:31.379 "num_base_bdevs_operational": 3, 00:09:31.379 "base_bdevs_list": [ 00:09:31.379 { 00:09:31.379 "name": "pt1", 00:09:31.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.379 "is_configured": true, 00:09:31.379 "data_offset": 2048, 00:09:31.379 "data_size": 63488 00:09:31.379 }, 00:09:31.379 { 00:09:31.379 "name": "pt2", 00:09:31.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.379 "is_configured": true, 00:09:31.379 "data_offset": 2048, 00:09:31.379 "data_size": 63488 00:09:31.379 }, 00:09:31.379 { 00:09:31.379 "name": "pt3", 00:09:31.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.379 "is_configured": true, 00:09:31.379 "data_offset": 2048, 00:09:31.379 "data_size": 63488 00:09:31.379 } 00:09:31.379 ] 00:09:31.379 }' 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.379 11:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:31.638 11:41:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.638 [2024-11-04 11:41:57.064196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.638 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.638 "name": "raid_bdev1", 00:09:31.638 "aliases": [ 00:09:31.638 "fb45e541-bba8-471c-a4ad-9f7894a175f1" 00:09:31.638 ], 00:09:31.638 "product_name": "Raid Volume", 00:09:31.638 "block_size": 512, 00:09:31.638 "num_blocks": 190464, 00:09:31.638 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:31.638 "assigned_rate_limits": { 00:09:31.638 "rw_ios_per_sec": 0, 00:09:31.638 "rw_mbytes_per_sec": 0, 00:09:31.638 "r_mbytes_per_sec": 0, 00:09:31.638 "w_mbytes_per_sec": 0 00:09:31.638 }, 00:09:31.638 "claimed": false, 00:09:31.638 "zoned": false, 00:09:31.638 "supported_io_types": { 00:09:31.638 "read": true, 00:09:31.638 "write": true, 00:09:31.638 "unmap": true, 00:09:31.638 "flush": true, 00:09:31.638 "reset": true, 00:09:31.638 "nvme_admin": false, 00:09:31.638 "nvme_io": false, 00:09:31.638 "nvme_io_md": false, 00:09:31.638 
"write_zeroes": true, 00:09:31.638 "zcopy": false, 00:09:31.638 "get_zone_info": false, 00:09:31.638 "zone_management": false, 00:09:31.638 "zone_append": false, 00:09:31.638 "compare": false, 00:09:31.638 "compare_and_write": false, 00:09:31.638 "abort": false, 00:09:31.638 "seek_hole": false, 00:09:31.638 "seek_data": false, 00:09:31.638 "copy": false, 00:09:31.638 "nvme_iov_md": false 00:09:31.638 }, 00:09:31.638 "memory_domains": [ 00:09:31.638 { 00:09:31.638 "dma_device_id": "system", 00:09:31.638 "dma_device_type": 1 00:09:31.638 }, 00:09:31.638 { 00:09:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.638 "dma_device_type": 2 00:09:31.638 }, 00:09:31.638 { 00:09:31.638 "dma_device_id": "system", 00:09:31.638 "dma_device_type": 1 00:09:31.638 }, 00:09:31.638 { 00:09:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.638 "dma_device_type": 2 00:09:31.638 }, 00:09:31.638 { 00:09:31.638 "dma_device_id": "system", 00:09:31.638 "dma_device_type": 1 00:09:31.638 }, 00:09:31.638 { 00:09:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.638 "dma_device_type": 2 00:09:31.638 } 00:09:31.638 ], 00:09:31.638 "driver_specific": { 00:09:31.638 "raid": { 00:09:31.638 "uuid": "fb45e541-bba8-471c-a4ad-9f7894a175f1", 00:09:31.638 "strip_size_kb": 64, 00:09:31.638 "state": "online", 00:09:31.638 "raid_level": "raid0", 00:09:31.638 "superblock": true, 00:09:31.639 "num_base_bdevs": 3, 00:09:31.639 "num_base_bdevs_discovered": 3, 00:09:31.639 "num_base_bdevs_operational": 3, 00:09:31.639 "base_bdevs_list": [ 00:09:31.639 { 00:09:31.639 "name": "pt1", 00:09:31.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.639 "is_configured": true, 00:09:31.639 "data_offset": 2048, 00:09:31.639 "data_size": 63488 00:09:31.639 }, 00:09:31.639 { 00:09:31.639 "name": "pt2", 00:09:31.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.639 "is_configured": true, 00:09:31.639 "data_offset": 2048, 00:09:31.639 "data_size": 63488 00:09:31.639 }, 00:09:31.639 
{ 00:09:31.639 "name": "pt3", 00:09:31.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.639 "is_configured": true, 00:09:31.639 "data_offset": 2048, 00:09:31.639 "data_size": 63488 00:09:31.639 } 00:09:31.639 ] 00:09:31.639 } 00:09:31.639 } 00:09:31.639 }' 00:09:31.639 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.639 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:31.639 pt2 00:09:31.639 pt3' 00:09:31.639 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:31.899 11:41:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.899 
[2024-11-04 11:41:57.315682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fb45e541-bba8-471c-a4ad-9f7894a175f1 '!=' fb45e541-bba8-471c-a4ad-9f7894a175f1 ']' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65283 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65283 ']' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65283 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65283 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.899 killing process with pid 65283 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65283' 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65283 00:09:31.899 [2024-11-04 11:41:57.380052] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.899 [2024-11-04 11:41:57.380195] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.899 [2024-11-04 11:41:57.380275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.899 [2024-11-04 11:41:57.380292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.899 11:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65283 00:09:32.467 [2024-11-04 11:41:57.718422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.844 11:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:33.844 00:09:33.844 real 0m5.473s 00:09:33.844 user 0m7.835s 00:09:33.844 sys 0m0.889s 00:09:33.844 11:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.844 11:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.844 ************************************ 00:09:33.844 END TEST raid_superblock_test 00:09:33.844 ************************************ 00:09:33.844 11:41:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:33.844 11:41:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:33.844 11:41:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.844 11:41:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.844 ************************************ 00:09:33.844 START TEST raid_read_error_test 00:09:33.844 ************************************ 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:33.844 11:41:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sYrH2zuEkV 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65536 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65536 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65536 ']' 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:33.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.844 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:33.845 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.845 [2024-11-04 11:41:59.129384] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:09:33.845 [2024-11-04 11:41:59.129586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65536 ] 00:09:33.845 [2024-11-04 11:41:59.316119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.104 [2024-11-04 11:41:59.441496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.362 [2024-11-04 11:41:59.672155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.362 [2024-11-04 11:41:59.672235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.621 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:34.621 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:34.621 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.621 11:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:34.621 11:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.621 BaseBdev1_malloc 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.621 true 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.621 [2024-11-04 11:42:00.054804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:34.621 [2024-11-04 11:42:00.054862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.621 [2024-11-04 11:42:00.054885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:34.621 [2024-11-04 11:42:00.054897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.621 [2024-11-04 11:42:00.057446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.621 [2024-11-04 11:42:00.057498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:34.621 BaseBdev1 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.621 BaseBdev2_malloc 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.621 true 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.621 [2024-11-04 11:42:00.125829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:34.621 [2024-11-04 11:42:00.125909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.621 [2024-11-04 11:42:00.125929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:34.621 [2024-11-04 11:42:00.125941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.621 [2024-11-04 11:42:00.128407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.621 [2024-11-04 11:42:00.128466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:34.621 BaseBdev2 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.621 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.881 BaseBdev3_malloc 00:09:34.881 11:42:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.881 true 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.881 [2024-11-04 11:42:00.206226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:34.881 [2024-11-04 11:42:00.206288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.881 [2024-11-04 11:42:00.206308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:34.881 [2024-11-04 11:42:00.206318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.881 [2024-11-04 11:42:00.208545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.881 [2024-11-04 11:42:00.208581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:34.881 BaseBdev3 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.881 [2024-11-04 11:42:00.218278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.881 [2024-11-04 11:42:00.220170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.881 [2024-11-04 11:42:00.220275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.881 [2024-11-04 11:42:00.220491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:34.881 [2024-11-04 11:42:00.220526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.881 [2024-11-04 11:42:00.220827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:34.881 [2024-11-04 11:42:00.221023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:34.881 [2024-11-04 11:42:00.221046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:34.881 [2024-11-04 11:42:00.221239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.881 11:42:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.881 "name": "raid_bdev1", 00:09:34.881 "uuid": "f9c600ab-0ac5-4155-93ac-42740c5a2107", 00:09:34.881 "strip_size_kb": 64, 00:09:34.881 "state": "online", 00:09:34.881 "raid_level": "raid0", 00:09:34.881 "superblock": true, 00:09:34.881 "num_base_bdevs": 3, 00:09:34.881 "num_base_bdevs_discovered": 3, 00:09:34.881 "num_base_bdevs_operational": 3, 00:09:34.881 "base_bdevs_list": [ 00:09:34.881 { 00:09:34.881 "name": "BaseBdev1", 00:09:34.881 "uuid": "42b1654f-ab70-5069-b106-3f018ea1b832", 00:09:34.881 "is_configured": true, 00:09:34.881 "data_offset": 2048, 00:09:34.881 "data_size": 63488 00:09:34.881 }, 00:09:34.881 { 00:09:34.881 "name": "BaseBdev2", 00:09:34.881 "uuid": "9685415d-297b-5be5-b44d-8119dff70d24", 00:09:34.881 "is_configured": true, 00:09:34.881 "data_offset": 2048, 00:09:34.881 "data_size": 63488 
00:09:34.881 }, 00:09:34.881 { 00:09:34.881 "name": "BaseBdev3", 00:09:34.881 "uuid": "3394102e-6cd8-5950-9603-386cb69a4133", 00:09:34.881 "is_configured": true, 00:09:34.881 "data_offset": 2048, 00:09:34.881 "data_size": 63488 00:09:34.881 } 00:09:34.881 ] 00:09:34.881 }' 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.881 11:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.450 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:35.450 11:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:35.450 [2024-11-04 11:42:00.798964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.390 "name": "raid_bdev1", 00:09:36.390 "uuid": "f9c600ab-0ac5-4155-93ac-42740c5a2107", 00:09:36.390 "strip_size_kb": 64, 00:09:36.390 "state": "online", 00:09:36.390 "raid_level": "raid0", 00:09:36.390 "superblock": true, 00:09:36.390 "num_base_bdevs": 3, 00:09:36.390 "num_base_bdevs_discovered": 3, 00:09:36.390 "num_base_bdevs_operational": 3, 00:09:36.390 "base_bdevs_list": [ 00:09:36.390 { 00:09:36.390 "name": "BaseBdev1", 00:09:36.390 "uuid": "42b1654f-ab70-5069-b106-3f018ea1b832", 00:09:36.390 "is_configured": true, 00:09:36.390 "data_offset": 2048, 00:09:36.390 "data_size": 63488 
00:09:36.390 }, 00:09:36.390 { 00:09:36.390 "name": "BaseBdev2", 00:09:36.390 "uuid": "9685415d-297b-5be5-b44d-8119dff70d24", 00:09:36.390 "is_configured": true, 00:09:36.390 "data_offset": 2048, 00:09:36.390 "data_size": 63488 00:09:36.390 }, 00:09:36.390 { 00:09:36.390 "name": "BaseBdev3", 00:09:36.390 "uuid": "3394102e-6cd8-5950-9603-386cb69a4133", 00:09:36.390 "is_configured": true, 00:09:36.390 "data_offset": 2048, 00:09:36.390 "data_size": 63488 00:09:36.390 } 00:09:36.390 ] 00:09:36.390 }' 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.390 11:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.960 [2024-11-04 11:42:02.183560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.960 [2024-11-04 11:42:02.183601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.960 [2024-11-04 11:42:02.186677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.960 [2024-11-04 11:42:02.186731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.960 [2024-11-04 11:42:02.186771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.960 [2024-11-04 11:42:02.186782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:36.960 { 00:09:36.960 "results": [ 00:09:36.960 { 00:09:36.960 "job": "raid_bdev1", 00:09:36.960 "core_mask": "0x1", 00:09:36.960 "workload": "randrw", 00:09:36.960 "percentage": 50, 
00:09:36.960 "status": "finished", 00:09:36.960 "queue_depth": 1, 00:09:36.960 "io_size": 131072, 00:09:36.960 "runtime": 1.385242, 00:09:36.960 "iops": 14143.377113890569, 00:09:36.960 "mibps": 1767.922139236321, 00:09:36.960 "io_failed": 1, 00:09:36.960 "io_timeout": 0, 00:09:36.960 "avg_latency_us": 98.29297398567397, 00:09:36.960 "min_latency_us": 23.811353711790392, 00:09:36.960 "max_latency_us": 1523.926637554585 00:09:36.960 } 00:09:36.960 ], 00:09:36.960 "core_count": 1 00:09:36.960 } 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65536 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65536 ']' 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65536 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65536 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65536' 00:09:36.960 killing process with pid 65536 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65536 00:09:36.960 [2024-11-04 11:42:02.231094] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.960 11:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65536 00:09:37.219 [2024-11-04 
11:42:02.486181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sYrH2zuEkV 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:38.597 00:09:38.597 real 0m4.724s 00:09:38.597 user 0m5.648s 00:09:38.597 sys 0m0.602s 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.597 11:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.597 ************************************ 00:09:38.597 END TEST raid_read_error_test 00:09:38.597 ************************************ 00:09:38.597 11:42:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:38.597 11:42:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:38.597 11:42:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.597 11:42:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.597 ************************************ 00:09:38.597 START TEST raid_write_error_test 00:09:38.597 ************************************ 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:09:38.597 11:42:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:38.597 11:42:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7qMloABYFk 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65687 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65687 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65687 ']' 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:38.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:38.597 11:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.597 [2024-11-04 11:42:03.910922] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:09:38.597 [2024-11-04 11:42:03.911059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65687 ] 00:09:38.597 [2024-11-04 11:42:04.086473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.856 [2024-11-04 11:42:04.207875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.115 [2024-11-04 11:42:04.427512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.115 [2024-11-04 11:42:04.427558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.375 BaseBdev1_malloc 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.375 true 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.375 [2024-11-04 11:42:04.838005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:39.375 [2024-11-04 11:42:04.838068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.375 [2024-11-04 11:42:04.838093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:39.375 [2024-11-04 11:42:04.838103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.375 [2024-11-04 11:42:04.840307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.375 [2024-11-04 11:42:04.840345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:39.375 BaseBdev1 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.375 BaseBdev2_malloc 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.375 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 true 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 [2024-11-04 11:42:04.905651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:39.636 [2024-11-04 11:42:04.905728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.636 [2024-11-04 11:42:04.905752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:39.636 [2024-11-04 11:42:04.905765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.636 [2024-11-04 11:42:04.908192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.636 [2024-11-04 11:42:04.908290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:39.636 BaseBdev2 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.636 11:42:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 BaseBdev3_malloc 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 true 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 [2024-11-04 11:42:04.986022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:39.636 [2024-11-04 11:42:04.986137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.636 [2024-11-04 11:42:04.986195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:39.636 [2024-11-04 11:42:04.986208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.636 [2024-11-04 11:42:04.988371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.636 [2024-11-04 11:42:04.988424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:39.636 BaseBdev3 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.636 11:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.636 [2024-11-04 11:42:04.998091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.636 [2024-11-04 11:42:04.999974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.636 [2024-11-04 11:42:05.000082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.636 [2024-11-04 11:42:05.000294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.636 [2024-11-04 11:42:05.000309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.636 [2024-11-04 11:42:05.000617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:39.636 [2024-11-04 11:42:05.000824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.636 [2024-11-04 11:42:05.000846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:39.636 [2024-11-04 11:42:05.001024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.636 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.637 "name": "raid_bdev1", 00:09:39.637 "uuid": "a9f50257-62ff-4a96-8f96-92423717d8bf", 00:09:39.637 "strip_size_kb": 64, 00:09:39.637 "state": "online", 00:09:39.637 "raid_level": "raid0", 00:09:39.637 "superblock": true, 00:09:39.637 "num_base_bdevs": 3, 00:09:39.637 "num_base_bdevs_discovered": 3, 00:09:39.637 "num_base_bdevs_operational": 3, 00:09:39.637 "base_bdevs_list": [ 00:09:39.637 { 00:09:39.637 "name": "BaseBdev1", 
00:09:39.637 "uuid": "2942d854-9d2d-5f98-9339-d796249c81c5", 00:09:39.637 "is_configured": true, 00:09:39.637 "data_offset": 2048, 00:09:39.637 "data_size": 63488 00:09:39.637 }, 00:09:39.637 { 00:09:39.637 "name": "BaseBdev2", 00:09:39.637 "uuid": "69f725f7-8bc2-50af-bbef-b73de11b2d23", 00:09:39.637 "is_configured": true, 00:09:39.637 "data_offset": 2048, 00:09:39.637 "data_size": 63488 00:09:39.637 }, 00:09:39.637 { 00:09:39.637 "name": "BaseBdev3", 00:09:39.637 "uuid": "f11e2e42-49ad-537d-84cd-5e9153241198", 00:09:39.637 "is_configured": true, 00:09:39.637 "data_offset": 2048, 00:09:39.637 "data_size": 63488 00:09:39.637 } 00:09:39.637 ] 00:09:39.637 }' 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.637 11:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.207 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:40.207 11:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:40.207 [2024-11-04 11:42:05.550474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.147 "name": "raid_bdev1", 00:09:41.147 "uuid": "a9f50257-62ff-4a96-8f96-92423717d8bf", 00:09:41.147 "strip_size_kb": 64, 00:09:41.147 "state": "online", 00:09:41.147 
"raid_level": "raid0", 00:09:41.147 "superblock": true, 00:09:41.147 "num_base_bdevs": 3, 00:09:41.147 "num_base_bdevs_discovered": 3, 00:09:41.147 "num_base_bdevs_operational": 3, 00:09:41.147 "base_bdevs_list": [ 00:09:41.147 { 00:09:41.147 "name": "BaseBdev1", 00:09:41.147 "uuid": "2942d854-9d2d-5f98-9339-d796249c81c5", 00:09:41.147 "is_configured": true, 00:09:41.147 "data_offset": 2048, 00:09:41.147 "data_size": 63488 00:09:41.147 }, 00:09:41.147 { 00:09:41.147 "name": "BaseBdev2", 00:09:41.147 "uuid": "69f725f7-8bc2-50af-bbef-b73de11b2d23", 00:09:41.147 "is_configured": true, 00:09:41.147 "data_offset": 2048, 00:09:41.147 "data_size": 63488 00:09:41.147 }, 00:09:41.147 { 00:09:41.147 "name": "BaseBdev3", 00:09:41.147 "uuid": "f11e2e42-49ad-537d-84cd-5e9153241198", 00:09:41.147 "is_configured": true, 00:09:41.147 "data_offset": 2048, 00:09:41.147 "data_size": 63488 00:09:41.147 } 00:09:41.147 ] 00:09:41.147 }' 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.147 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.716 [2024-11-04 11:42:06.935140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.716 [2024-11-04 11:42:06.935196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.716 [2024-11-04 11:42:06.938438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.716 [2024-11-04 11:42:06.938492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.716 [2024-11-04 11:42:06.938535] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.716 [2024-11-04 11:42:06.938545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:41.716 { 00:09:41.716 "results": [ 00:09:41.716 { 00:09:41.716 "job": "raid_bdev1", 00:09:41.716 "core_mask": "0x1", 00:09:41.716 "workload": "randrw", 00:09:41.716 "percentage": 50, 00:09:41.716 "status": "finished", 00:09:41.716 "queue_depth": 1, 00:09:41.716 "io_size": 131072, 00:09:41.716 "runtime": 1.385428, 00:09:41.716 "iops": 14871.938491209936, 00:09:41.716 "mibps": 1858.992311401242, 00:09:41.716 "io_failed": 1, 00:09:41.716 "io_timeout": 0, 00:09:41.716 "avg_latency_us": 93.43262281063336, 00:09:41.716 "min_latency_us": 19.116157205240174, 00:09:41.716 "max_latency_us": 1631.2454148471616 00:09:41.716 } 00:09:41.716 ], 00:09:41.716 "core_count": 1 00:09:41.716 } 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65687 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65687 ']' 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65687 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65687 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 65687' 00:09:41.716 killing process with pid 65687 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65687 00:09:41.716 [2024-11-04 11:42:06.975431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.716 11:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65687 00:09:41.716 [2024-11-04 11:42:07.233843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7qMloABYFk 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:43.095 00:09:43.095 real 0m4.666s 00:09:43.095 user 0m5.546s 00:09:43.095 sys 0m0.575s 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:43.095 ************************************ 00:09:43.095 END TEST raid_write_error_test 00:09:43.095 ************************************ 00:09:43.095 11:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.095 11:42:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:43.095 11:42:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:43.095 11:42:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:43.095 11:42:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:43.095 11:42:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.095 ************************************ 00:09:43.095 START TEST raid_state_function_test 00:09:43.095 ************************************ 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.095 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:43.096 11:42:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65826 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65826' 00:09:43.096 Process raid pid: 65826 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65826 00:09:43.096 11:42:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65826 ']' 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:43.096 11:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.356 [2024-11-04 11:42:08.644042] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:09:43.356 [2024-11-04 11:42:08.644205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.356 [2024-11-04 11:42:08.819656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.617 [2024-11-04 11:42:08.946480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.909 [2024-11-04 11:42:09.180489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.909 [2024-11-04 11:42:09.180541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.169 [2024-11-04 11:42:09.534827] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.169 [2024-11-04 11:42:09.534888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.169 [2024-11-04 11:42:09.534900] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.169 [2024-11-04 11:42:09.534911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.169 [2024-11-04 11:42:09.534918] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.169 [2024-11-04 11:42:09.534928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.169 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.169 "name": "Existed_Raid", 00:09:44.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.169 "strip_size_kb": 64, 00:09:44.169 "state": "configuring", 00:09:44.169 "raid_level": "concat", 00:09:44.169 "superblock": false, 00:09:44.169 "num_base_bdevs": 3, 00:09:44.169 "num_base_bdevs_discovered": 0, 00:09:44.169 "num_base_bdevs_operational": 3, 00:09:44.169 "base_bdevs_list": [ 00:09:44.169 { 00:09:44.169 "name": "BaseBdev1", 00:09:44.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.169 "is_configured": false, 00:09:44.169 "data_offset": 0, 00:09:44.169 "data_size": 0 00:09:44.169 }, 00:09:44.169 { 00:09:44.169 "name": "BaseBdev2", 00:09:44.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.170 "is_configured": false, 00:09:44.170 "data_offset": 0, 00:09:44.170 "data_size": 0 00:09:44.170 }, 00:09:44.170 { 00:09:44.170 "name": "BaseBdev3", 00:09:44.170 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:44.170 "is_configured": false, 00:09:44.170 "data_offset": 0, 00:09:44.170 "data_size": 0 00:09:44.170 } 00:09:44.170 ] 00:09:44.170 }' 00:09:44.170 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.170 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.737 [2024-11-04 11:42:09.958054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.737 [2024-11-04 11:42:09.958146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.737 [2024-11-04 11:42:09.970031] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.737 [2024-11-04 11:42:09.970117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.737 [2024-11-04 11:42:09.970164] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.737 [2024-11-04 11:42:09.970191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:44.737 [2024-11-04 11:42:09.970212] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.737 [2024-11-04 11:42:09.970236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.737 11:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.737 [2024-11-04 11:42:10.021976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.737 BaseBdev1 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.737 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.737 [ 00:09:44.737 { 00:09:44.737 "name": "BaseBdev1", 00:09:44.737 "aliases": [ 00:09:44.737 "84dee594-0a92-41c0-80ef-6d7ca502efe5" 00:09:44.737 ], 00:09:44.737 "product_name": "Malloc disk", 00:09:44.737 "block_size": 512, 00:09:44.737 "num_blocks": 65536, 00:09:44.737 "uuid": "84dee594-0a92-41c0-80ef-6d7ca502efe5", 00:09:44.737 "assigned_rate_limits": { 00:09:44.737 "rw_ios_per_sec": 0, 00:09:44.737 "rw_mbytes_per_sec": 0, 00:09:44.737 "r_mbytes_per_sec": 0, 00:09:44.737 "w_mbytes_per_sec": 0 00:09:44.737 }, 00:09:44.737 "claimed": true, 00:09:44.737 "claim_type": "exclusive_write", 00:09:44.737 "zoned": false, 00:09:44.737 "supported_io_types": { 00:09:44.737 "read": true, 00:09:44.737 "write": true, 00:09:44.737 "unmap": true, 00:09:44.737 "flush": true, 00:09:44.737 "reset": true, 00:09:44.738 "nvme_admin": false, 00:09:44.738 "nvme_io": false, 00:09:44.738 "nvme_io_md": false, 00:09:44.738 "write_zeroes": true, 00:09:44.738 "zcopy": true, 00:09:44.738 "get_zone_info": false, 00:09:44.738 "zone_management": false, 00:09:44.738 "zone_append": false, 00:09:44.738 "compare": false, 00:09:44.738 "compare_and_write": false, 00:09:44.738 "abort": true, 00:09:44.738 "seek_hole": false, 00:09:44.738 "seek_data": false, 00:09:44.738 "copy": true, 00:09:44.738 "nvme_iov_md": false 00:09:44.738 }, 00:09:44.738 "memory_domains": [ 00:09:44.738 { 00:09:44.738 "dma_device_id": "system", 00:09:44.738 "dma_device_type": 1 00:09:44.738 }, 00:09:44.738 { 00:09:44.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:44.738 "dma_device_type": 2 00:09:44.738 } 00:09:44.738 ], 00:09:44.738 "driver_specific": {} 00:09:44.738 } 00:09:44.738 ] 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.738 11:42:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.738 "name": "Existed_Raid", 00:09:44.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.738 "strip_size_kb": 64, 00:09:44.738 "state": "configuring", 00:09:44.738 "raid_level": "concat", 00:09:44.738 "superblock": false, 00:09:44.738 "num_base_bdevs": 3, 00:09:44.738 "num_base_bdevs_discovered": 1, 00:09:44.738 "num_base_bdevs_operational": 3, 00:09:44.738 "base_bdevs_list": [ 00:09:44.738 { 00:09:44.738 "name": "BaseBdev1", 00:09:44.738 "uuid": "84dee594-0a92-41c0-80ef-6d7ca502efe5", 00:09:44.738 "is_configured": true, 00:09:44.738 "data_offset": 0, 00:09:44.738 "data_size": 65536 00:09:44.738 }, 00:09:44.738 { 00:09:44.738 "name": "BaseBdev2", 00:09:44.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.738 "is_configured": false, 00:09:44.738 "data_offset": 0, 00:09:44.738 "data_size": 0 00:09:44.738 }, 00:09:44.738 { 00:09:44.738 "name": "BaseBdev3", 00:09:44.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.738 "is_configured": false, 00:09:44.738 "data_offset": 0, 00:09:44.738 "data_size": 0 00:09:44.738 } 00:09:44.738 ] 00:09:44.738 }' 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.738 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.307 [2024-11-04 11:42:10.541194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.307 [2024-11-04 11:42:10.541332] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.307 [2024-11-04 11:42:10.553240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.307 [2024-11-04 11:42:10.555307] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.307 [2024-11-04 11:42:10.555407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.307 [2024-11-04 11:42:10.555481] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.307 [2024-11-04 11:42:10.555527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.307 11:42:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.307 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.308 "name": "Existed_Raid", 00:09:45.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.308 "strip_size_kb": 64, 00:09:45.308 "state": "configuring", 00:09:45.308 "raid_level": "concat", 00:09:45.308 "superblock": false, 00:09:45.308 "num_base_bdevs": 3, 00:09:45.308 "num_base_bdevs_discovered": 1, 00:09:45.308 "num_base_bdevs_operational": 3, 00:09:45.308 "base_bdevs_list": [ 00:09:45.308 { 00:09:45.308 "name": "BaseBdev1", 00:09:45.308 "uuid": "84dee594-0a92-41c0-80ef-6d7ca502efe5", 00:09:45.308 "is_configured": true, 00:09:45.308 "data_offset": 
0, 00:09:45.308 "data_size": 65536 00:09:45.308 }, 00:09:45.308 { 00:09:45.308 "name": "BaseBdev2", 00:09:45.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.308 "is_configured": false, 00:09:45.308 "data_offset": 0, 00:09:45.308 "data_size": 0 00:09:45.308 }, 00:09:45.308 { 00:09:45.308 "name": "BaseBdev3", 00:09:45.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.308 "is_configured": false, 00:09:45.308 "data_offset": 0, 00:09:45.308 "data_size": 0 00:09:45.308 } 00:09:45.308 ] 00:09:45.308 }' 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.308 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 11:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.567 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 11:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 [2024-11-04 11:42:11.019718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.567 BaseBdev2 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 [ 00:09:45.567 { 00:09:45.567 "name": "BaseBdev2", 00:09:45.567 "aliases": [ 00:09:45.567 "f2c76508-04da-412d-9ff7-256b3102b230" 00:09:45.567 ], 00:09:45.567 "product_name": "Malloc disk", 00:09:45.567 "block_size": 512, 00:09:45.567 "num_blocks": 65536, 00:09:45.567 "uuid": "f2c76508-04da-412d-9ff7-256b3102b230", 00:09:45.567 "assigned_rate_limits": { 00:09:45.567 "rw_ios_per_sec": 0, 00:09:45.567 "rw_mbytes_per_sec": 0, 00:09:45.567 "r_mbytes_per_sec": 0, 00:09:45.567 "w_mbytes_per_sec": 0 00:09:45.567 }, 00:09:45.567 "claimed": true, 00:09:45.567 "claim_type": "exclusive_write", 00:09:45.567 "zoned": false, 00:09:45.567 "supported_io_types": { 00:09:45.567 "read": true, 00:09:45.567 "write": true, 00:09:45.567 "unmap": true, 00:09:45.567 "flush": true, 00:09:45.567 "reset": true, 00:09:45.567 "nvme_admin": false, 00:09:45.567 "nvme_io": false, 00:09:45.567 "nvme_io_md": false, 00:09:45.567 "write_zeroes": true, 00:09:45.567 "zcopy": true, 00:09:45.567 "get_zone_info": false, 00:09:45.567 "zone_management": false, 00:09:45.567 "zone_append": false, 00:09:45.567 "compare": false, 00:09:45.567 "compare_and_write": false, 00:09:45.567 "abort": true, 00:09:45.567 "seek_hole": 
false, 00:09:45.567 "seek_data": false, 00:09:45.567 "copy": true, 00:09:45.567 "nvme_iov_md": false 00:09:45.567 }, 00:09:45.567 "memory_domains": [ 00:09:45.567 { 00:09:45.567 "dma_device_id": "system", 00:09:45.567 "dma_device_type": 1 00:09:45.567 }, 00:09:45.567 { 00:09:45.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.567 "dma_device_type": 2 00:09:45.567 } 00:09:45.567 ], 00:09:45.567 "driver_specific": {} 00:09:45.567 } 00:09:45.567 ] 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.567 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.827 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.827 "name": "Existed_Raid", 00:09:45.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.827 "strip_size_kb": 64, 00:09:45.827 "state": "configuring", 00:09:45.827 "raid_level": "concat", 00:09:45.827 "superblock": false, 00:09:45.827 "num_base_bdevs": 3, 00:09:45.827 "num_base_bdevs_discovered": 2, 00:09:45.827 "num_base_bdevs_operational": 3, 00:09:45.827 "base_bdevs_list": [ 00:09:45.827 { 00:09:45.827 "name": "BaseBdev1", 00:09:45.827 "uuid": "84dee594-0a92-41c0-80ef-6d7ca502efe5", 00:09:45.827 "is_configured": true, 00:09:45.827 "data_offset": 0, 00:09:45.827 "data_size": 65536 00:09:45.827 }, 00:09:45.827 { 00:09:45.827 "name": "BaseBdev2", 00:09:45.827 "uuid": "f2c76508-04da-412d-9ff7-256b3102b230", 00:09:45.827 "is_configured": true, 00:09:45.827 "data_offset": 0, 00:09:45.827 "data_size": 65536 00:09:45.827 }, 00:09:45.827 { 00:09:45.827 "name": "BaseBdev3", 00:09:45.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.827 "is_configured": false, 00:09:45.827 "data_offset": 0, 00:09:45.827 "data_size": 0 00:09:45.827 } 00:09:45.827 ] 00:09:45.827 }' 00:09:45.827 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.827 11:42:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.088 [2024-11-04 11:42:11.558953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.088 [2024-11-04 11:42:11.559007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.088 [2024-11-04 11:42:11.559019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:46.088 [2024-11-04 11:42:11.559283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.088 [2024-11-04 11:42:11.559487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.088 [2024-11-04 11:42:11.559499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:46.088 [2024-11-04 11:42:11.559786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.088 BaseBdev3 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:46.088 11:42:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.088 [ 00:09:46.088 { 00:09:46.088 "name": "BaseBdev3", 00:09:46.088 "aliases": [ 00:09:46.088 "f54da639-ce78-4d7e-9864-7fd900476a92" 00:09:46.088 ], 00:09:46.088 "product_name": "Malloc disk", 00:09:46.088 "block_size": 512, 00:09:46.088 "num_blocks": 65536, 00:09:46.088 "uuid": "f54da639-ce78-4d7e-9864-7fd900476a92", 00:09:46.088 "assigned_rate_limits": { 00:09:46.088 "rw_ios_per_sec": 0, 00:09:46.088 "rw_mbytes_per_sec": 0, 00:09:46.088 "r_mbytes_per_sec": 0, 00:09:46.088 "w_mbytes_per_sec": 0 00:09:46.088 }, 00:09:46.088 "claimed": true, 00:09:46.088 "claim_type": "exclusive_write", 00:09:46.088 "zoned": false, 00:09:46.088 "supported_io_types": { 00:09:46.088 "read": true, 00:09:46.088 "write": true, 00:09:46.088 "unmap": true, 00:09:46.088 "flush": true, 00:09:46.088 "reset": true, 00:09:46.088 "nvme_admin": false, 00:09:46.088 "nvme_io": false, 00:09:46.088 "nvme_io_md": false, 00:09:46.088 "write_zeroes": true, 00:09:46.088 "zcopy": true, 00:09:46.088 "get_zone_info": false, 00:09:46.088 "zone_management": false, 00:09:46.088 "zone_append": false, 00:09:46.088 "compare": false, 
00:09:46.088 "compare_and_write": false, 00:09:46.088 "abort": true, 00:09:46.088 "seek_hole": false, 00:09:46.088 "seek_data": false, 00:09:46.088 "copy": true, 00:09:46.088 "nvme_iov_md": false 00:09:46.088 }, 00:09:46.088 "memory_domains": [ 00:09:46.088 { 00:09:46.088 "dma_device_id": "system", 00:09:46.088 "dma_device_type": 1 00:09:46.088 }, 00:09:46.088 { 00:09:46.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.088 "dma_device_type": 2 00:09:46.088 } 00:09:46.088 ], 00:09:46.088 "driver_specific": {} 00:09:46.088 } 00:09:46.088 ] 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.088 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.348 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.348 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.348 "name": "Existed_Raid", 00:09:46.348 "uuid": "d1e1b2b9-c796-4adf-8201-bd0519f141c5", 00:09:46.348 "strip_size_kb": 64, 00:09:46.348 "state": "online", 00:09:46.348 "raid_level": "concat", 00:09:46.348 "superblock": false, 00:09:46.348 "num_base_bdevs": 3, 00:09:46.348 "num_base_bdevs_discovered": 3, 00:09:46.348 "num_base_bdevs_operational": 3, 00:09:46.348 "base_bdevs_list": [ 00:09:46.348 { 00:09:46.348 "name": "BaseBdev1", 00:09:46.348 "uuid": "84dee594-0a92-41c0-80ef-6d7ca502efe5", 00:09:46.348 "is_configured": true, 00:09:46.348 "data_offset": 0, 00:09:46.348 "data_size": 65536 00:09:46.348 }, 00:09:46.348 { 00:09:46.348 "name": "BaseBdev2", 00:09:46.348 "uuid": "f2c76508-04da-412d-9ff7-256b3102b230", 00:09:46.348 "is_configured": true, 00:09:46.348 "data_offset": 0, 00:09:46.348 "data_size": 65536 00:09:46.348 }, 00:09:46.348 { 00:09:46.348 "name": "BaseBdev3", 00:09:46.348 "uuid": "f54da639-ce78-4d7e-9864-7fd900476a92", 00:09:46.348 "is_configured": true, 00:09:46.348 "data_offset": 0, 00:09:46.348 "data_size": 65536 00:09:46.348 } 00:09:46.348 ] 00:09:46.348 }' 00:09:46.348 11:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:46.348 11:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.608 [2024-11-04 11:42:12.078529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.608 "name": "Existed_Raid", 00:09:46.608 "aliases": [ 00:09:46.608 "d1e1b2b9-c796-4adf-8201-bd0519f141c5" 00:09:46.608 ], 00:09:46.608 "product_name": "Raid Volume", 00:09:46.608 "block_size": 512, 00:09:46.608 "num_blocks": 196608, 00:09:46.608 "uuid": "d1e1b2b9-c796-4adf-8201-bd0519f141c5", 00:09:46.608 "assigned_rate_limits": { 00:09:46.608 "rw_ios_per_sec": 0, 00:09:46.608 "rw_mbytes_per_sec": 0, 00:09:46.608 "r_mbytes_per_sec": 
0, 00:09:46.608 "w_mbytes_per_sec": 0 00:09:46.608 }, 00:09:46.608 "claimed": false, 00:09:46.608 "zoned": false, 00:09:46.608 "supported_io_types": { 00:09:46.608 "read": true, 00:09:46.608 "write": true, 00:09:46.608 "unmap": true, 00:09:46.608 "flush": true, 00:09:46.608 "reset": true, 00:09:46.608 "nvme_admin": false, 00:09:46.608 "nvme_io": false, 00:09:46.608 "nvme_io_md": false, 00:09:46.608 "write_zeroes": true, 00:09:46.608 "zcopy": false, 00:09:46.608 "get_zone_info": false, 00:09:46.608 "zone_management": false, 00:09:46.608 "zone_append": false, 00:09:46.608 "compare": false, 00:09:46.608 "compare_and_write": false, 00:09:46.608 "abort": false, 00:09:46.608 "seek_hole": false, 00:09:46.608 "seek_data": false, 00:09:46.608 "copy": false, 00:09:46.608 "nvme_iov_md": false 00:09:46.608 }, 00:09:46.608 "memory_domains": [ 00:09:46.608 { 00:09:46.608 "dma_device_id": "system", 00:09:46.608 "dma_device_type": 1 00:09:46.608 }, 00:09:46.608 { 00:09:46.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.608 "dma_device_type": 2 00:09:46.608 }, 00:09:46.608 { 00:09:46.608 "dma_device_id": "system", 00:09:46.608 "dma_device_type": 1 00:09:46.608 }, 00:09:46.608 { 00:09:46.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.608 "dma_device_type": 2 00:09:46.608 }, 00:09:46.608 { 00:09:46.608 "dma_device_id": "system", 00:09:46.608 "dma_device_type": 1 00:09:46.608 }, 00:09:46.608 { 00:09:46.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.608 "dma_device_type": 2 00:09:46.608 } 00:09:46.608 ], 00:09:46.608 "driver_specific": { 00:09:46.608 "raid": { 00:09:46.608 "uuid": "d1e1b2b9-c796-4adf-8201-bd0519f141c5", 00:09:46.608 "strip_size_kb": 64, 00:09:46.608 "state": "online", 00:09:46.608 "raid_level": "concat", 00:09:46.608 "superblock": false, 00:09:46.608 "num_base_bdevs": 3, 00:09:46.608 "num_base_bdevs_discovered": 3, 00:09:46.608 "num_base_bdevs_operational": 3, 00:09:46.608 "base_bdevs_list": [ 00:09:46.608 { 00:09:46.608 "name": "BaseBdev1", 
00:09:46.608 "uuid": "84dee594-0a92-41c0-80ef-6d7ca502efe5", 00:09:46.608 "is_configured": true, 00:09:46.608 "data_offset": 0, 00:09:46.608 "data_size": 65536 00:09:46.608 }, 00:09:46.608 { 00:09:46.608 "name": "BaseBdev2", 00:09:46.608 "uuid": "f2c76508-04da-412d-9ff7-256b3102b230", 00:09:46.608 "is_configured": true, 00:09:46.608 "data_offset": 0, 00:09:46.608 "data_size": 65536 00:09:46.608 }, 00:09:46.608 { 00:09:46.608 "name": "BaseBdev3", 00:09:46.608 "uuid": "f54da639-ce78-4d7e-9864-7fd900476a92", 00:09:46.608 "is_configured": true, 00:09:46.608 "data_offset": 0, 00:09:46.608 "data_size": 65536 00:09:46.608 } 00:09:46.608 ] 00:09:46.608 } 00:09:46.608 } 00:09:46.608 }' 00:09:46.608 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.869 BaseBdev2 00:09:46.869 BaseBdev3' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.869 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.869 [2024-11-04 11:42:12.341803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.869 [2024-11-04 11:42:12.341840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.869 [2024-11-04 11:42:12.341900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.130 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.130 "name": "Existed_Raid", 00:09:47.130 "uuid": "d1e1b2b9-c796-4adf-8201-bd0519f141c5", 00:09:47.130 "strip_size_kb": 64, 00:09:47.130 "state": "offline", 00:09:47.130 "raid_level": "concat", 00:09:47.130 "superblock": false, 00:09:47.130 "num_base_bdevs": 3, 00:09:47.130 "num_base_bdevs_discovered": 2, 00:09:47.130 "num_base_bdevs_operational": 2, 00:09:47.130 "base_bdevs_list": [ 00:09:47.130 { 00:09:47.130 "name": null, 00:09:47.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.130 "is_configured": false, 00:09:47.130 "data_offset": 0, 00:09:47.130 "data_size": 65536 00:09:47.130 }, 00:09:47.130 { 00:09:47.130 "name": "BaseBdev2", 00:09:47.130 "uuid": 
"f2c76508-04da-412d-9ff7-256b3102b230", 00:09:47.130 "is_configured": true, 00:09:47.130 "data_offset": 0, 00:09:47.130 "data_size": 65536 00:09:47.131 }, 00:09:47.131 { 00:09:47.131 "name": "BaseBdev3", 00:09:47.131 "uuid": "f54da639-ce78-4d7e-9864-7fd900476a92", 00:09:47.131 "is_configured": true, 00:09:47.131 "data_offset": 0, 00:09:47.131 "data_size": 65536 00:09:47.131 } 00:09:47.131 ] 00:09:47.131 }' 00:09:47.131 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.131 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.700 11:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.700 [2024-11-04 11:42:12.975345] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.700 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.701 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.701 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.701 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:47.701 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.701 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.701 [2024-11-04 11:42:13.135485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.701 [2024-11-04 11:42:13.135622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.961 11:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.961 BaseBdev2 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:47.961 
11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.961 [ 00:09:47.961 { 00:09:47.961 "name": "BaseBdev2", 00:09:47.961 "aliases": [ 00:09:47.961 "59c503ae-fa50-4a8d-9ed9-33575775ce8a" 00:09:47.961 ], 00:09:47.961 "product_name": "Malloc disk", 00:09:47.961 "block_size": 512, 00:09:47.961 "num_blocks": 65536, 00:09:47.961 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:47.961 "assigned_rate_limits": { 00:09:47.961 "rw_ios_per_sec": 0, 00:09:47.961 "rw_mbytes_per_sec": 0, 00:09:47.961 "r_mbytes_per_sec": 0, 00:09:47.961 "w_mbytes_per_sec": 0 00:09:47.961 }, 00:09:47.961 "claimed": false, 00:09:47.961 "zoned": false, 00:09:47.961 "supported_io_types": { 00:09:47.961 "read": true, 00:09:47.961 "write": true, 00:09:47.961 "unmap": true, 00:09:47.961 "flush": true, 00:09:47.961 "reset": true, 00:09:47.961 "nvme_admin": false, 00:09:47.961 "nvme_io": false, 00:09:47.961 "nvme_io_md": false, 00:09:47.961 "write_zeroes": true, 
00:09:47.961 "zcopy": true, 00:09:47.961 "get_zone_info": false, 00:09:47.961 "zone_management": false, 00:09:47.961 "zone_append": false, 00:09:47.961 "compare": false, 00:09:47.961 "compare_and_write": false, 00:09:47.961 "abort": true, 00:09:47.961 "seek_hole": false, 00:09:47.961 "seek_data": false, 00:09:47.961 "copy": true, 00:09:47.961 "nvme_iov_md": false 00:09:47.961 }, 00:09:47.961 "memory_domains": [ 00:09:47.961 { 00:09:47.961 "dma_device_id": "system", 00:09:47.961 "dma_device_type": 1 00:09:47.961 }, 00:09:47.961 { 00:09:47.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.961 "dma_device_type": 2 00:09:47.961 } 00:09:47.961 ], 00:09:47.961 "driver_specific": {} 00:09:47.961 } 00:09:47.961 ] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.961 BaseBdev3 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:47.961 11:42:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.961 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.961 [ 00:09:47.961 { 00:09:47.961 "name": "BaseBdev3", 00:09:47.961 "aliases": [ 00:09:47.961 "ccdd2c8e-58c0-4ff8-93c8-b91221404303" 00:09:47.961 ], 00:09:47.961 "product_name": "Malloc disk", 00:09:47.961 "block_size": 512, 00:09:47.961 "num_blocks": 65536, 00:09:47.961 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:47.961 "assigned_rate_limits": { 00:09:47.961 "rw_ios_per_sec": 0, 00:09:47.961 "rw_mbytes_per_sec": 0, 00:09:47.961 "r_mbytes_per_sec": 0, 00:09:47.961 "w_mbytes_per_sec": 0 00:09:47.961 }, 00:09:47.961 "claimed": false, 00:09:47.961 "zoned": false, 00:09:47.961 "supported_io_types": { 00:09:47.961 "read": true, 00:09:47.961 "write": true, 00:09:47.961 "unmap": true, 00:09:47.961 "flush": true, 00:09:47.961 "reset": true, 00:09:47.961 "nvme_admin": false, 00:09:47.961 "nvme_io": false, 00:09:47.961 "nvme_io_md": false, 00:09:47.961 "write_zeroes": true, 
00:09:47.961 "zcopy": true, 00:09:47.961 "get_zone_info": false, 00:09:47.961 "zone_management": false, 00:09:47.961 "zone_append": false, 00:09:47.961 "compare": false, 00:09:47.961 "compare_and_write": false, 00:09:47.961 "abort": true, 00:09:47.961 "seek_hole": false, 00:09:47.961 "seek_data": false, 00:09:47.961 "copy": true, 00:09:47.961 "nvme_iov_md": false 00:09:47.961 }, 00:09:47.961 "memory_domains": [ 00:09:47.961 { 00:09:47.961 "dma_device_id": "system", 00:09:47.961 "dma_device_type": 1 00:09:47.961 }, 00:09:47.961 { 00:09:47.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.961 "dma_device_type": 2 00:09:47.961 } 00:09:47.961 ], 00:09:47.961 "driver_specific": {} 00:09:47.961 } 00:09:47.962 ] 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.962 [2024-11-04 11:42:13.462167] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.962 [2024-11-04 11:42:13.462286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.962 [2024-11-04 11:42:13.462335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.962 [2024-11-04 11:42:13.464445] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.962 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.222 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.222 11:42:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.222 "name": "Existed_Raid", 00:09:48.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.222 "strip_size_kb": 64, 00:09:48.222 "state": "configuring", 00:09:48.222 "raid_level": "concat", 00:09:48.222 "superblock": false, 00:09:48.222 "num_base_bdevs": 3, 00:09:48.222 "num_base_bdevs_discovered": 2, 00:09:48.222 "num_base_bdevs_operational": 3, 00:09:48.222 "base_bdevs_list": [ 00:09:48.222 { 00:09:48.222 "name": "BaseBdev1", 00:09:48.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.222 "is_configured": false, 00:09:48.222 "data_offset": 0, 00:09:48.222 "data_size": 0 00:09:48.222 }, 00:09:48.222 { 00:09:48.222 "name": "BaseBdev2", 00:09:48.222 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:48.222 "is_configured": true, 00:09:48.222 "data_offset": 0, 00:09:48.222 "data_size": 65536 00:09:48.222 }, 00:09:48.222 { 00:09:48.222 "name": "BaseBdev3", 00:09:48.222 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:48.222 "is_configured": true, 00:09:48.222 "data_offset": 0, 00:09:48.222 "data_size": 65536 00:09:48.222 } 00:09:48.222 ] 00:09:48.222 }' 00:09:48.222 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.222 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.481 [2024-11-04 11:42:13.961324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.481 11:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.740 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.740 "name": "Existed_Raid", 00:09:48.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.740 "strip_size_kb": 64, 00:09:48.740 "state": "configuring", 00:09:48.740 "raid_level": "concat", 00:09:48.740 "superblock": false, 
00:09:48.740 "num_base_bdevs": 3, 00:09:48.740 "num_base_bdevs_discovered": 1, 00:09:48.740 "num_base_bdevs_operational": 3, 00:09:48.740 "base_bdevs_list": [ 00:09:48.740 { 00:09:48.740 "name": "BaseBdev1", 00:09:48.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.740 "is_configured": false, 00:09:48.740 "data_offset": 0, 00:09:48.740 "data_size": 0 00:09:48.740 }, 00:09:48.740 { 00:09:48.740 "name": null, 00:09:48.740 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:48.740 "is_configured": false, 00:09:48.740 "data_offset": 0, 00:09:48.740 "data_size": 65536 00:09:48.740 }, 00:09:48.740 { 00:09:48.740 "name": "BaseBdev3", 00:09:48.740 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:48.740 "is_configured": true, 00:09:48.740 "data_offset": 0, 00:09:48.740 "data_size": 65536 00:09:48.740 } 00:09:48.740 ] 00:09:48.740 }' 00:09:48.740 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.740 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.999 
11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.999 [2024-11-04 11:42:14.500792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.999 BaseBdev1 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.999 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.258 [ 00:09:49.258 { 00:09:49.258 "name": "BaseBdev1", 00:09:49.258 "aliases": [ 00:09:49.258 "0d85a2bc-c99f-4f5d-8338-403d3bbee48b" 00:09:49.258 ], 00:09:49.258 "product_name": 
"Malloc disk", 00:09:49.258 "block_size": 512, 00:09:49.258 "num_blocks": 65536, 00:09:49.258 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:49.258 "assigned_rate_limits": { 00:09:49.258 "rw_ios_per_sec": 0, 00:09:49.258 "rw_mbytes_per_sec": 0, 00:09:49.258 "r_mbytes_per_sec": 0, 00:09:49.258 "w_mbytes_per_sec": 0 00:09:49.258 }, 00:09:49.258 "claimed": true, 00:09:49.258 "claim_type": "exclusive_write", 00:09:49.258 "zoned": false, 00:09:49.258 "supported_io_types": { 00:09:49.258 "read": true, 00:09:49.258 "write": true, 00:09:49.258 "unmap": true, 00:09:49.258 "flush": true, 00:09:49.258 "reset": true, 00:09:49.258 "nvme_admin": false, 00:09:49.258 "nvme_io": false, 00:09:49.258 "nvme_io_md": false, 00:09:49.258 "write_zeroes": true, 00:09:49.258 "zcopy": true, 00:09:49.258 "get_zone_info": false, 00:09:49.258 "zone_management": false, 00:09:49.258 "zone_append": false, 00:09:49.258 "compare": false, 00:09:49.258 "compare_and_write": false, 00:09:49.258 "abort": true, 00:09:49.258 "seek_hole": false, 00:09:49.258 "seek_data": false, 00:09:49.258 "copy": true, 00:09:49.258 "nvme_iov_md": false 00:09:49.258 }, 00:09:49.258 "memory_domains": [ 00:09:49.258 { 00:09:49.258 "dma_device_id": "system", 00:09:49.258 "dma_device_type": 1 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.258 "dma_device_type": 2 00:09:49.258 } 00:09:49.258 ], 00:09:49.258 "driver_specific": {} 00:09:49.258 } 00:09:49.258 ] 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.258 11:42:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.258 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.258 "name": "Existed_Raid", 00:09:49.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.258 "strip_size_kb": 64, 00:09:49.258 "state": "configuring", 00:09:49.258 "raid_level": "concat", 00:09:49.258 "superblock": false, 00:09:49.258 "num_base_bdevs": 3, 00:09:49.258 "num_base_bdevs_discovered": 2, 00:09:49.258 "num_base_bdevs_operational": 3, 00:09:49.258 "base_bdevs_list": [ 00:09:49.258 { 00:09:49.258 "name": "BaseBdev1", 
00:09:49.258 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:49.258 "is_configured": true, 00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.258 }, 00:09:49.258 { 00:09:49.258 "name": null, 00:09:49.258 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:49.258 "is_configured": false, 00:09:49.258 "data_offset": 0, 00:09:49.258 "data_size": 65536 00:09:49.259 }, 00:09:49.259 { 00:09:49.259 "name": "BaseBdev3", 00:09:49.259 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:49.259 "is_configured": true, 00:09:49.259 "data_offset": 0, 00:09:49.259 "data_size": 65536 00:09:49.259 } 00:09:49.259 ] 00:09:49.259 }' 00:09:49.259 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.259 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.518 [2024-11-04 11:42:14.976188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.518 
11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.518 11:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.518 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.518 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.518 "name": "Existed_Raid", 00:09:49.518 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:49.518 "strip_size_kb": 64, 00:09:49.518 "state": "configuring", 00:09:49.518 "raid_level": "concat", 00:09:49.518 "superblock": false, 00:09:49.518 "num_base_bdevs": 3, 00:09:49.518 "num_base_bdevs_discovered": 1, 00:09:49.518 "num_base_bdevs_operational": 3, 00:09:49.518 "base_bdevs_list": [ 00:09:49.518 { 00:09:49.518 "name": "BaseBdev1", 00:09:49.518 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:49.518 "is_configured": true, 00:09:49.518 "data_offset": 0, 00:09:49.518 "data_size": 65536 00:09:49.518 }, 00:09:49.518 { 00:09:49.518 "name": null, 00:09:49.518 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:49.518 "is_configured": false, 00:09:49.518 "data_offset": 0, 00:09:49.518 "data_size": 65536 00:09:49.518 }, 00:09:49.518 { 00:09:49.518 "name": null, 00:09:49.518 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:49.518 "is_configured": false, 00:09:49.518 "data_offset": 0, 00:09:49.518 "data_size": 65536 00:09:49.518 } 00:09:49.518 ] 00:09:49.518 }' 00:09:49.518 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.518 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.088 [2024-11-04 11:42:15.483412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.088 "name": "Existed_Raid", 00:09:50.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.088 "strip_size_kb": 64, 00:09:50.088 "state": "configuring", 00:09:50.088 "raid_level": "concat", 00:09:50.088 "superblock": false, 00:09:50.088 "num_base_bdevs": 3, 00:09:50.088 "num_base_bdevs_discovered": 2, 00:09:50.088 "num_base_bdevs_operational": 3, 00:09:50.088 "base_bdevs_list": [ 00:09:50.088 { 00:09:50.088 "name": "BaseBdev1", 00:09:50.088 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:50.088 "is_configured": true, 00:09:50.088 "data_offset": 0, 00:09:50.088 "data_size": 65536 00:09:50.088 }, 00:09:50.088 { 00:09:50.088 "name": null, 00:09:50.088 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:50.088 "is_configured": false, 00:09:50.088 "data_offset": 0, 00:09:50.088 "data_size": 65536 00:09:50.088 }, 00:09:50.088 { 00:09:50.088 "name": "BaseBdev3", 00:09:50.088 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:50.088 "is_configured": true, 00:09:50.088 "data_offset": 0, 00:09:50.088 "data_size": 65536 00:09:50.088 } 00:09:50.088 ] 00:09:50.088 }' 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.088 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.673 11:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.673 [2024-11-04 11:42:15.970576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.674 11:42:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.674 "name": "Existed_Raid", 00:09:50.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.674 "strip_size_kb": 64, 00:09:50.674 "state": "configuring", 00:09:50.674 "raid_level": "concat", 00:09:50.674 "superblock": false, 00:09:50.674 "num_base_bdevs": 3, 00:09:50.674 "num_base_bdevs_discovered": 1, 00:09:50.674 "num_base_bdevs_operational": 3, 00:09:50.674 "base_bdevs_list": [ 00:09:50.674 { 00:09:50.674 "name": null, 00:09:50.674 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:50.674 "is_configured": false, 00:09:50.674 "data_offset": 0, 00:09:50.674 "data_size": 65536 00:09:50.674 }, 00:09:50.674 { 00:09:50.674 "name": null, 00:09:50.674 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:50.674 "is_configured": false, 00:09:50.674 "data_offset": 0, 00:09:50.674 "data_size": 65536 00:09:50.674 }, 00:09:50.674 { 00:09:50.674 "name": "BaseBdev3", 00:09:50.674 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:50.674 "is_configured": true, 00:09:50.674 "data_offset": 0, 00:09:50.674 "data_size": 65536 00:09:50.674 } 00:09:50.674 ] 00:09:50.674 }' 00:09:50.674 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.674 11:42:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.242 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.243 [2024-11-04 11:42:16.561146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.243 11:42:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.243 "name": "Existed_Raid", 00:09:51.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.243 "strip_size_kb": 64, 00:09:51.243 "state": "configuring", 00:09:51.243 "raid_level": "concat", 00:09:51.243 "superblock": false, 00:09:51.243 "num_base_bdevs": 3, 00:09:51.243 "num_base_bdevs_discovered": 2, 00:09:51.243 "num_base_bdevs_operational": 3, 00:09:51.243 "base_bdevs_list": [ 00:09:51.243 { 00:09:51.243 "name": null, 00:09:51.243 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:51.243 "is_configured": false, 00:09:51.243 "data_offset": 0, 00:09:51.243 "data_size": 65536 00:09:51.243 }, 00:09:51.243 { 00:09:51.243 "name": "BaseBdev2", 00:09:51.243 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:51.243 "is_configured": true, 00:09:51.243 "data_offset": 
0, 00:09:51.243 "data_size": 65536 00:09:51.243 }, 00:09:51.243 { 00:09:51.243 "name": "BaseBdev3", 00:09:51.243 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:51.243 "is_configured": true, 00:09:51.243 "data_offset": 0, 00:09:51.243 "data_size": 65536 00:09:51.243 } 00:09:51.243 ] 00:09:51.243 }' 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.243 11:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.503 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.503 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.503 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.503 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0d85a2bc-c99f-4f5d-8338-403d3bbee48b 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.764 [2024-11-04 11:42:17.153672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:51.764 [2024-11-04 11:42:17.153801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.764 [2024-11-04 11:42:17.153815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:51.764 [2024-11-04 11:42:17.154084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:51.764 [2024-11-04 11:42:17.154234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.764 [2024-11-04 11:42:17.154243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:51.764 [2024-11-04 11:42:17.154504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.764 NewBaseBdev 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:51.764 
11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.764 [ 00:09:51.764 { 00:09:51.764 "name": "NewBaseBdev", 00:09:51.764 "aliases": [ 00:09:51.764 "0d85a2bc-c99f-4f5d-8338-403d3bbee48b" 00:09:51.764 ], 00:09:51.764 "product_name": "Malloc disk", 00:09:51.764 "block_size": 512, 00:09:51.764 "num_blocks": 65536, 00:09:51.764 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:51.764 "assigned_rate_limits": { 00:09:51.764 "rw_ios_per_sec": 0, 00:09:51.764 "rw_mbytes_per_sec": 0, 00:09:51.764 "r_mbytes_per_sec": 0, 00:09:51.764 "w_mbytes_per_sec": 0 00:09:51.764 }, 00:09:51.764 "claimed": true, 00:09:51.764 "claim_type": "exclusive_write", 00:09:51.764 "zoned": false, 00:09:51.764 "supported_io_types": { 00:09:51.764 "read": true, 00:09:51.764 "write": true, 00:09:51.764 "unmap": true, 00:09:51.764 "flush": true, 00:09:51.764 "reset": true, 00:09:51.764 "nvme_admin": false, 00:09:51.764 "nvme_io": false, 00:09:51.764 "nvme_io_md": false, 00:09:51.764 "write_zeroes": true, 00:09:51.764 "zcopy": true, 00:09:51.764 "get_zone_info": false, 00:09:51.764 "zone_management": false, 00:09:51.764 "zone_append": false, 00:09:51.764 "compare": false, 00:09:51.764 "compare_and_write": false, 00:09:51.764 "abort": true, 00:09:51.764 "seek_hole": false, 00:09:51.764 "seek_data": false, 00:09:51.764 "copy": true, 00:09:51.764 "nvme_iov_md": false 00:09:51.764 }, 00:09:51.764 
"memory_domains": [ 00:09:51.764 { 00:09:51.764 "dma_device_id": "system", 00:09:51.764 "dma_device_type": 1 00:09:51.764 }, 00:09:51.764 { 00:09:51.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.764 "dma_device_type": 2 00:09:51.764 } 00:09:51.764 ], 00:09:51.764 "driver_specific": {} 00:09:51.764 } 00:09:51.764 ] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.764 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.764 "name": "Existed_Raid", 00:09:51.764 "uuid": "3ee13dbf-986a-42d9-82b0-4917ee864164", 00:09:51.764 "strip_size_kb": 64, 00:09:51.764 "state": "online", 00:09:51.764 "raid_level": "concat", 00:09:51.764 "superblock": false, 00:09:51.764 "num_base_bdevs": 3, 00:09:51.764 "num_base_bdevs_discovered": 3, 00:09:51.764 "num_base_bdevs_operational": 3, 00:09:51.764 "base_bdevs_list": [ 00:09:51.764 { 00:09:51.764 "name": "NewBaseBdev", 00:09:51.764 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:51.764 "is_configured": true, 00:09:51.764 "data_offset": 0, 00:09:51.764 "data_size": 65536 00:09:51.764 }, 00:09:51.764 { 00:09:51.764 "name": "BaseBdev2", 00:09:51.764 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:51.764 "is_configured": true, 00:09:51.764 "data_offset": 0, 00:09:51.764 "data_size": 65536 00:09:51.764 }, 00:09:51.764 { 00:09:51.764 "name": "BaseBdev3", 00:09:51.764 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:51.764 "is_configured": true, 00:09:51.764 "data_offset": 0, 00:09:51.764 "data_size": 65536 00:09:51.764 } 00:09:51.764 ] 00:09:51.765 }' 00:09:51.765 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.765 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.334 [2024-11-04 11:42:17.673188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.334 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.334 "name": "Existed_Raid", 00:09:52.334 "aliases": [ 00:09:52.334 "3ee13dbf-986a-42d9-82b0-4917ee864164" 00:09:52.334 ], 00:09:52.334 "product_name": "Raid Volume", 00:09:52.334 "block_size": 512, 00:09:52.334 "num_blocks": 196608, 00:09:52.334 "uuid": "3ee13dbf-986a-42d9-82b0-4917ee864164", 00:09:52.334 "assigned_rate_limits": { 00:09:52.334 "rw_ios_per_sec": 0, 00:09:52.334 "rw_mbytes_per_sec": 0, 00:09:52.334 "r_mbytes_per_sec": 0, 00:09:52.334 "w_mbytes_per_sec": 0 00:09:52.334 }, 00:09:52.334 "claimed": false, 00:09:52.334 "zoned": false, 00:09:52.334 "supported_io_types": { 00:09:52.334 "read": true, 00:09:52.334 "write": true, 00:09:52.334 "unmap": true, 00:09:52.334 "flush": true, 00:09:52.334 "reset": true, 00:09:52.334 "nvme_admin": false, 00:09:52.334 "nvme_io": false, 00:09:52.334 "nvme_io_md": false, 00:09:52.334 "write_zeroes": true, 
00:09:52.334 "zcopy": false, 00:09:52.334 "get_zone_info": false, 00:09:52.334 "zone_management": false, 00:09:52.334 "zone_append": false, 00:09:52.334 "compare": false, 00:09:52.334 "compare_and_write": false, 00:09:52.334 "abort": false, 00:09:52.334 "seek_hole": false, 00:09:52.334 "seek_data": false, 00:09:52.334 "copy": false, 00:09:52.334 "nvme_iov_md": false 00:09:52.334 }, 00:09:52.334 "memory_domains": [ 00:09:52.334 { 00:09:52.334 "dma_device_id": "system", 00:09:52.334 "dma_device_type": 1 00:09:52.334 }, 00:09:52.334 { 00:09:52.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.334 "dma_device_type": 2 00:09:52.334 }, 00:09:52.334 { 00:09:52.334 "dma_device_id": "system", 00:09:52.334 "dma_device_type": 1 00:09:52.334 }, 00:09:52.334 { 00:09:52.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.334 "dma_device_type": 2 00:09:52.334 }, 00:09:52.334 { 00:09:52.334 "dma_device_id": "system", 00:09:52.334 "dma_device_type": 1 00:09:52.334 }, 00:09:52.334 { 00:09:52.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.334 "dma_device_type": 2 00:09:52.334 } 00:09:52.334 ], 00:09:52.334 "driver_specific": { 00:09:52.334 "raid": { 00:09:52.334 "uuid": "3ee13dbf-986a-42d9-82b0-4917ee864164", 00:09:52.334 "strip_size_kb": 64, 00:09:52.334 "state": "online", 00:09:52.334 "raid_level": "concat", 00:09:52.334 "superblock": false, 00:09:52.334 "num_base_bdevs": 3, 00:09:52.334 "num_base_bdevs_discovered": 3, 00:09:52.334 "num_base_bdevs_operational": 3, 00:09:52.334 "base_bdevs_list": [ 00:09:52.334 { 00:09:52.334 "name": "NewBaseBdev", 00:09:52.334 "uuid": "0d85a2bc-c99f-4f5d-8338-403d3bbee48b", 00:09:52.334 "is_configured": true, 00:09:52.334 "data_offset": 0, 00:09:52.334 "data_size": 65536 00:09:52.334 }, 00:09:52.334 { 00:09:52.334 "name": "BaseBdev2", 00:09:52.334 "uuid": "59c503ae-fa50-4a8d-9ed9-33575775ce8a", 00:09:52.334 "is_configured": true, 00:09:52.334 "data_offset": 0, 00:09:52.335 "data_size": 65536 00:09:52.335 }, 00:09:52.335 { 
00:09:52.335 "name": "BaseBdev3", 00:09:52.335 "uuid": "ccdd2c8e-58c0-4ff8-93c8-b91221404303", 00:09:52.335 "is_configured": true, 00:09:52.335 "data_offset": 0, 00:09:52.335 "data_size": 65536 00:09:52.335 } 00:09:52.335 ] 00:09:52.335 } 00:09:52.335 } 00:09:52.335 }' 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:52.335 BaseBdev2 00:09:52.335 BaseBdev3' 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.335 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:52.594 [2024-11-04 11:42:17.952447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.594 [2024-11-04 11:42:17.952489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.594 [2024-11-04 11:42:17.952591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.594 [2024-11-04 11:42:17.952647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.594 [2024-11-04 11:42:17.952659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65826 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65826 ']' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65826 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65826 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:52.594 killing process with pid 65826 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65826' 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65826 00:09:52.594 [2024-11-04 11:42:17.992252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.594 11:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65826 00:09:52.854 [2024-11-04 11:42:18.303293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:54.237 ************************************ 00:09:54.237 END TEST raid_state_function_test 00:09:54.237 ************************************ 00:09:54.237 00:09:54.237 real 0m10.916s 00:09:54.237 user 0m17.214s 00:09:54.237 sys 0m1.979s 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.237 11:42:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:54.237 11:42:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:54.237 11:42:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.237 11:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.237 ************************************ 00:09:54.237 START TEST raid_state_function_test_sb 00:09:54.237 ************************************ 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66452 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66452' 00:09:54.237 Process raid pid: 66452 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66452 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66452 ']' 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:54.237 11:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.237 [2024-11-04 11:42:19.638149] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:09:54.237 [2024-11-04 11:42:19.638274] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.499 [2024-11-04 11:42:19.814569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.499 [2024-11-04 11:42:19.932560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.758 [2024-11-04 11:42:20.139441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.758 [2024-11-04 11:42:20.139489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.018 [2024-11-04 11:42:20.526126] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.018 [2024-11-04 11:42:20.526277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.018 [2024-11-04 
11:42:20.526295] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.018 [2024-11-04 11:42:20.526307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.018 [2024-11-04 11:42:20.526315] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.018 [2024-11-04 11:42:20.526340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.018 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.019 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.019 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.019 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.019 11:42:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.019 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.278 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.278 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.278 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.278 "name": "Existed_Raid", 00:09:55.278 "uuid": "2d4a2f52-1ce2-4a87-845c-623d06f267cd", 00:09:55.278 "strip_size_kb": 64, 00:09:55.278 "state": "configuring", 00:09:55.278 "raid_level": "concat", 00:09:55.278 "superblock": true, 00:09:55.278 "num_base_bdevs": 3, 00:09:55.278 "num_base_bdevs_discovered": 0, 00:09:55.278 "num_base_bdevs_operational": 3, 00:09:55.278 "base_bdevs_list": [ 00:09:55.278 { 00:09:55.278 "name": "BaseBdev1", 00:09:55.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.278 "is_configured": false, 00:09:55.278 "data_offset": 0, 00:09:55.278 "data_size": 0 00:09:55.278 }, 00:09:55.278 { 00:09:55.278 "name": "BaseBdev2", 00:09:55.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.278 "is_configured": false, 00:09:55.278 "data_offset": 0, 00:09:55.278 "data_size": 0 00:09:55.278 }, 00:09:55.278 { 00:09:55.278 "name": "BaseBdev3", 00:09:55.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.278 "is_configured": false, 00:09:55.278 "data_offset": 0, 00:09:55.278 "data_size": 0 00:09:55.278 } 00:09:55.278 ] 00:09:55.278 }' 00:09:55.279 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.279 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.539 11:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.539 11:42:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.539 11:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.539 [2024-11-04 11:42:21.001230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.539 [2024-11-04 11:42:21.001362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:55.539 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.539 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.539 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.539 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.539 [2024-11-04 11:42:21.013211] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.539 [2024-11-04 11:42:21.013333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.539 [2024-11-04 11:42:21.013361] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.539 [2024-11-04 11:42:21.013385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.539 [2024-11-04 11:42:21.013416] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.539 [2024-11-04 11:42:21.013441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.539 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.539 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.539 
11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.539 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.799 [2024-11-04 11:42:21.063950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.799 BaseBdev1 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.799 [ 00:09:55.799 { 
00:09:55.799 "name": "BaseBdev1", 00:09:55.799 "aliases": [ 00:09:55.799 "575bcb07-0a4a-4359-a398-e69c406b1615" 00:09:55.799 ], 00:09:55.799 "product_name": "Malloc disk", 00:09:55.799 "block_size": 512, 00:09:55.799 "num_blocks": 65536, 00:09:55.799 "uuid": "575bcb07-0a4a-4359-a398-e69c406b1615", 00:09:55.799 "assigned_rate_limits": { 00:09:55.799 "rw_ios_per_sec": 0, 00:09:55.799 "rw_mbytes_per_sec": 0, 00:09:55.799 "r_mbytes_per_sec": 0, 00:09:55.799 "w_mbytes_per_sec": 0 00:09:55.799 }, 00:09:55.799 "claimed": true, 00:09:55.799 "claim_type": "exclusive_write", 00:09:55.799 "zoned": false, 00:09:55.799 "supported_io_types": { 00:09:55.799 "read": true, 00:09:55.799 "write": true, 00:09:55.799 "unmap": true, 00:09:55.799 "flush": true, 00:09:55.799 "reset": true, 00:09:55.799 "nvme_admin": false, 00:09:55.799 "nvme_io": false, 00:09:55.799 "nvme_io_md": false, 00:09:55.799 "write_zeroes": true, 00:09:55.799 "zcopy": true, 00:09:55.799 "get_zone_info": false, 00:09:55.799 "zone_management": false, 00:09:55.799 "zone_append": false, 00:09:55.799 "compare": false, 00:09:55.799 "compare_and_write": false, 00:09:55.799 "abort": true, 00:09:55.799 "seek_hole": false, 00:09:55.799 "seek_data": false, 00:09:55.799 "copy": true, 00:09:55.799 "nvme_iov_md": false 00:09:55.799 }, 00:09:55.799 "memory_domains": [ 00:09:55.799 { 00:09:55.799 "dma_device_id": "system", 00:09:55.799 "dma_device_type": 1 00:09:55.799 }, 00:09:55.799 { 00:09:55.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.799 "dma_device_type": 2 00:09:55.799 } 00:09:55.799 ], 00:09:55.799 "driver_specific": {} 00:09:55.799 } 00:09:55.799 ] 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.799 "name": "Existed_Raid", 00:09:55.799 "uuid": "e4b2228d-7af0-41e4-9388-57ff255941e4", 00:09:55.799 "strip_size_kb": 64, 00:09:55.799 "state": "configuring", 00:09:55.799 "raid_level": "concat", 00:09:55.799 "superblock": true, 00:09:55.799 
"num_base_bdevs": 3, 00:09:55.799 "num_base_bdevs_discovered": 1, 00:09:55.799 "num_base_bdevs_operational": 3, 00:09:55.799 "base_bdevs_list": [ 00:09:55.799 { 00:09:55.799 "name": "BaseBdev1", 00:09:55.799 "uuid": "575bcb07-0a4a-4359-a398-e69c406b1615", 00:09:55.799 "is_configured": true, 00:09:55.799 "data_offset": 2048, 00:09:55.799 "data_size": 63488 00:09:55.799 }, 00:09:55.799 { 00:09:55.799 "name": "BaseBdev2", 00:09:55.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.799 "is_configured": false, 00:09:55.799 "data_offset": 0, 00:09:55.799 "data_size": 0 00:09:55.799 }, 00:09:55.799 { 00:09:55.799 "name": "BaseBdev3", 00:09:55.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.799 "is_configured": false, 00:09:55.799 "data_offset": 0, 00:09:55.799 "data_size": 0 00:09:55.799 } 00:09:55.799 ] 00:09:55.799 }' 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.799 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.059 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.059 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.059 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.059 [2024-11-04 11:42:21.563192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.059 [2024-11-04 11:42:21.563267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:56.060 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.060 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.060 
11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.060 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.060 [2024-11-04 11:42:21.575231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.060 [2024-11-04 11:42:21.577128] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.060 [2024-11-04 11:42:21.577224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.060 [2024-11-04 11:42:21.577239] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.060 [2024-11-04 11:42:21.577249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.320 "name": "Existed_Raid", 00:09:56.320 "uuid": "348c24c3-d875-4858-997c-6a9233648105", 00:09:56.320 "strip_size_kb": 64, 00:09:56.320 "state": "configuring", 00:09:56.320 "raid_level": "concat", 00:09:56.320 "superblock": true, 00:09:56.320 "num_base_bdevs": 3, 00:09:56.320 "num_base_bdevs_discovered": 1, 00:09:56.320 "num_base_bdevs_operational": 3, 00:09:56.320 "base_bdevs_list": [ 00:09:56.320 { 00:09:56.320 "name": "BaseBdev1", 00:09:56.320 "uuid": "575bcb07-0a4a-4359-a398-e69c406b1615", 00:09:56.320 "is_configured": true, 00:09:56.320 "data_offset": 2048, 00:09:56.320 "data_size": 63488 00:09:56.320 }, 00:09:56.320 { 00:09:56.320 "name": "BaseBdev2", 00:09:56.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.320 "is_configured": false, 00:09:56.320 "data_offset": 0, 00:09:56.320 "data_size": 0 00:09:56.320 }, 00:09:56.320 { 00:09:56.320 "name": "BaseBdev3", 00:09:56.320 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:56.320 "is_configured": false, 00:09:56.320 "data_offset": 0, 00:09:56.320 "data_size": 0 00:09:56.320 } 00:09:56.320 ] 00:09:56.320 }' 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.320 11:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.580 [2024-11-04 11:42:22.080147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.580 BaseBdev2 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.580 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.840 [ 00:09:56.840 { 00:09:56.840 "name": "BaseBdev2", 00:09:56.840 "aliases": [ 00:09:56.840 "87e80659-6725-444d-94a5-c4feb78bf7c7" 00:09:56.840 ], 00:09:56.840 "product_name": "Malloc disk", 00:09:56.840 "block_size": 512, 00:09:56.840 "num_blocks": 65536, 00:09:56.840 "uuid": "87e80659-6725-444d-94a5-c4feb78bf7c7", 00:09:56.840 "assigned_rate_limits": { 00:09:56.840 "rw_ios_per_sec": 0, 00:09:56.840 "rw_mbytes_per_sec": 0, 00:09:56.840 "r_mbytes_per_sec": 0, 00:09:56.840 "w_mbytes_per_sec": 0 00:09:56.840 }, 00:09:56.840 "claimed": true, 00:09:56.840 "claim_type": "exclusive_write", 00:09:56.840 "zoned": false, 00:09:56.840 "supported_io_types": { 00:09:56.840 "read": true, 00:09:56.840 "write": true, 00:09:56.840 "unmap": true, 00:09:56.840 "flush": true, 00:09:56.840 "reset": true, 00:09:56.840 "nvme_admin": false, 00:09:56.840 "nvme_io": false, 00:09:56.840 "nvme_io_md": false, 00:09:56.840 "write_zeroes": true, 00:09:56.840 "zcopy": true, 00:09:56.840 "get_zone_info": false, 00:09:56.840 "zone_management": false, 00:09:56.840 "zone_append": false, 00:09:56.840 "compare": false, 00:09:56.840 "compare_and_write": false, 00:09:56.840 "abort": true, 00:09:56.840 "seek_hole": false, 00:09:56.840 "seek_data": false, 00:09:56.840 "copy": true, 00:09:56.840 "nvme_iov_md": false 00:09:56.840 }, 00:09:56.840 "memory_domains": [ 00:09:56.840 { 00:09:56.840 "dma_device_id": "system", 00:09:56.840 "dma_device_type": 1 00:09:56.840 }, 00:09:56.840 { 00:09:56.840 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.840 "dma_device_type": 2 00:09:56.840 } 00:09:56.840 ], 00:09:56.840 "driver_specific": {} 00:09:56.840 } 00:09:56.840 ] 00:09:56.840 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.840 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:56.840 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.841 "name": "Existed_Raid", 00:09:56.841 "uuid": "348c24c3-d875-4858-997c-6a9233648105", 00:09:56.841 "strip_size_kb": 64, 00:09:56.841 "state": "configuring", 00:09:56.841 "raid_level": "concat", 00:09:56.841 "superblock": true, 00:09:56.841 "num_base_bdevs": 3, 00:09:56.841 "num_base_bdevs_discovered": 2, 00:09:56.841 "num_base_bdevs_operational": 3, 00:09:56.841 "base_bdevs_list": [ 00:09:56.841 { 00:09:56.841 "name": "BaseBdev1", 00:09:56.841 "uuid": "575bcb07-0a4a-4359-a398-e69c406b1615", 00:09:56.841 "is_configured": true, 00:09:56.841 "data_offset": 2048, 00:09:56.841 "data_size": 63488 00:09:56.841 }, 00:09:56.841 { 00:09:56.841 "name": "BaseBdev2", 00:09:56.841 "uuid": "87e80659-6725-444d-94a5-c4feb78bf7c7", 00:09:56.841 "is_configured": true, 00:09:56.841 "data_offset": 2048, 00:09:56.841 "data_size": 63488 00:09:56.841 }, 00:09:56.841 { 00:09:56.841 "name": "BaseBdev3", 00:09:56.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.841 "is_configured": false, 00:09:56.841 "data_offset": 0, 00:09:56.841 "data_size": 0 00:09:56.841 } 00:09:56.841 ] 00:09:56.841 }' 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.841 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.101 11:42:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 [2024-11-04 11:42:22.615634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.101 [2024-11-04 11:42:22.616037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:57.101 [2024-11-04 11:42:22.616112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:57.101 [2024-11-04 11:42:22.616508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:57.101 [2024-11-04 11:42:22.616757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:57.101 [2024-11-04 11:42:22.616810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:57.101 BaseBdev3 00:09:57.101 [2024-11-04 11:42:22.617059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.101 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.361 [ 00:09:57.361 { 00:09:57.361 "name": "BaseBdev3", 00:09:57.361 "aliases": [ 00:09:57.361 "bd970828-ae36-46de-bde1-b805c14e0f99" 00:09:57.361 ], 00:09:57.361 "product_name": "Malloc disk", 00:09:57.361 "block_size": 512, 00:09:57.361 "num_blocks": 65536, 00:09:57.361 "uuid": "bd970828-ae36-46de-bde1-b805c14e0f99", 00:09:57.361 "assigned_rate_limits": { 00:09:57.361 "rw_ios_per_sec": 0, 00:09:57.361 "rw_mbytes_per_sec": 0, 00:09:57.361 "r_mbytes_per_sec": 0, 00:09:57.361 "w_mbytes_per_sec": 0 00:09:57.361 }, 00:09:57.361 "claimed": true, 00:09:57.361 "claim_type": "exclusive_write", 00:09:57.361 "zoned": false, 00:09:57.361 "supported_io_types": { 00:09:57.361 "read": true, 00:09:57.361 "write": true, 00:09:57.361 "unmap": true, 00:09:57.361 "flush": true, 00:09:57.361 "reset": true, 00:09:57.361 "nvme_admin": false, 00:09:57.361 "nvme_io": false, 00:09:57.361 "nvme_io_md": false, 00:09:57.361 "write_zeroes": true, 00:09:57.361 "zcopy": true, 00:09:57.361 "get_zone_info": false, 00:09:57.361 "zone_management": false, 00:09:57.361 "zone_append": false, 00:09:57.361 "compare": false, 00:09:57.361 "compare_and_write": false, 00:09:57.361 "abort": true, 00:09:57.361 "seek_hole": false, 00:09:57.361 "seek_data": false, 
00:09:57.361 "copy": true, 00:09:57.361 "nvme_iov_md": false 00:09:57.361 }, 00:09:57.361 "memory_domains": [ 00:09:57.361 { 00:09:57.361 "dma_device_id": "system", 00:09:57.361 "dma_device_type": 1 00:09:57.361 }, 00:09:57.361 { 00:09:57.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.361 "dma_device_type": 2 00:09:57.361 } 00:09:57.361 ], 00:09:57.361 "driver_specific": {} 00:09:57.361 } 00:09:57.361 ] 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.361 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.362 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.362 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.362 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.362 11:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.362 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.362 "name": "Existed_Raid", 00:09:57.362 "uuid": "348c24c3-d875-4858-997c-6a9233648105", 00:09:57.362 "strip_size_kb": 64, 00:09:57.362 "state": "online", 00:09:57.362 "raid_level": "concat", 00:09:57.362 "superblock": true, 00:09:57.362 "num_base_bdevs": 3, 00:09:57.362 "num_base_bdevs_discovered": 3, 00:09:57.362 "num_base_bdevs_operational": 3, 00:09:57.362 "base_bdevs_list": [ 00:09:57.362 { 00:09:57.362 "name": "BaseBdev1", 00:09:57.362 "uuid": "575bcb07-0a4a-4359-a398-e69c406b1615", 00:09:57.362 "is_configured": true, 00:09:57.362 "data_offset": 2048, 00:09:57.362 "data_size": 63488 00:09:57.362 }, 00:09:57.362 { 00:09:57.362 "name": "BaseBdev2", 00:09:57.362 "uuid": "87e80659-6725-444d-94a5-c4feb78bf7c7", 00:09:57.362 "is_configured": true, 00:09:57.362 "data_offset": 2048, 00:09:57.362 "data_size": 63488 00:09:57.362 }, 00:09:57.362 { 00:09:57.362 "name": "BaseBdev3", 00:09:57.362 "uuid": "bd970828-ae36-46de-bde1-b805c14e0f99", 00:09:57.362 "is_configured": true, 00:09:57.362 "data_offset": 2048, 00:09:57.362 "data_size": 63488 00:09:57.362 } 00:09:57.362 ] 00:09:57.362 }' 00:09:57.362 11:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.362 11:42:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.621 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.621 [2024-11-04 11:42:23.135152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.881 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.881 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.881 "name": "Existed_Raid", 00:09:57.881 "aliases": [ 00:09:57.881 "348c24c3-d875-4858-997c-6a9233648105" 00:09:57.881 ], 00:09:57.881 "product_name": "Raid Volume", 00:09:57.881 "block_size": 512, 00:09:57.881 "num_blocks": 190464, 00:09:57.881 "uuid": "348c24c3-d875-4858-997c-6a9233648105", 00:09:57.881 "assigned_rate_limits": { 00:09:57.881 "rw_ios_per_sec": 0, 00:09:57.881 "rw_mbytes_per_sec": 0, 00:09:57.881 
"r_mbytes_per_sec": 0, 00:09:57.881 "w_mbytes_per_sec": 0 00:09:57.881 }, 00:09:57.881 "claimed": false, 00:09:57.881 "zoned": false, 00:09:57.881 "supported_io_types": { 00:09:57.881 "read": true, 00:09:57.881 "write": true, 00:09:57.881 "unmap": true, 00:09:57.881 "flush": true, 00:09:57.881 "reset": true, 00:09:57.881 "nvme_admin": false, 00:09:57.881 "nvme_io": false, 00:09:57.881 "nvme_io_md": false, 00:09:57.881 "write_zeroes": true, 00:09:57.881 "zcopy": false, 00:09:57.881 "get_zone_info": false, 00:09:57.881 "zone_management": false, 00:09:57.881 "zone_append": false, 00:09:57.881 "compare": false, 00:09:57.881 "compare_and_write": false, 00:09:57.881 "abort": false, 00:09:57.881 "seek_hole": false, 00:09:57.881 "seek_data": false, 00:09:57.881 "copy": false, 00:09:57.881 "nvme_iov_md": false 00:09:57.881 }, 00:09:57.881 "memory_domains": [ 00:09:57.881 { 00:09:57.881 "dma_device_id": "system", 00:09:57.881 "dma_device_type": 1 00:09:57.881 }, 00:09:57.881 { 00:09:57.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.881 "dma_device_type": 2 00:09:57.881 }, 00:09:57.881 { 00:09:57.881 "dma_device_id": "system", 00:09:57.881 "dma_device_type": 1 00:09:57.881 }, 00:09:57.881 { 00:09:57.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.881 "dma_device_type": 2 00:09:57.881 }, 00:09:57.881 { 00:09:57.881 "dma_device_id": "system", 00:09:57.881 "dma_device_type": 1 00:09:57.881 }, 00:09:57.881 { 00:09:57.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.881 "dma_device_type": 2 00:09:57.881 } 00:09:57.881 ], 00:09:57.881 "driver_specific": { 00:09:57.881 "raid": { 00:09:57.881 "uuid": "348c24c3-d875-4858-997c-6a9233648105", 00:09:57.881 "strip_size_kb": 64, 00:09:57.881 "state": "online", 00:09:57.881 "raid_level": "concat", 00:09:57.881 "superblock": true, 00:09:57.881 "num_base_bdevs": 3, 00:09:57.881 "num_base_bdevs_discovered": 3, 00:09:57.881 "num_base_bdevs_operational": 3, 00:09:57.881 "base_bdevs_list": [ 00:09:57.881 { 00:09:57.881 
"name": "BaseBdev1", 00:09:57.881 "uuid": "575bcb07-0a4a-4359-a398-e69c406b1615", 00:09:57.881 "is_configured": true, 00:09:57.881 "data_offset": 2048, 00:09:57.882 "data_size": 63488 00:09:57.882 }, 00:09:57.882 { 00:09:57.882 "name": "BaseBdev2", 00:09:57.882 "uuid": "87e80659-6725-444d-94a5-c4feb78bf7c7", 00:09:57.882 "is_configured": true, 00:09:57.882 "data_offset": 2048, 00:09:57.882 "data_size": 63488 00:09:57.882 }, 00:09:57.882 { 00:09:57.882 "name": "BaseBdev3", 00:09:57.882 "uuid": "bd970828-ae36-46de-bde1-b805c14e0f99", 00:09:57.882 "is_configured": true, 00:09:57.882 "data_offset": 2048, 00:09:57.882 "data_size": 63488 00:09:57.882 } 00:09:57.882 ] 00:09:57.882 } 00:09:57.882 } 00:09:57.882 }' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.882 BaseBdev2 00:09:57.882 BaseBdev3' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.882 11:42:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.882 11:42:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.142 [2024-11-04 11:42:23.430471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.142 [2024-11-04 11:42:23.430508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.142 [2024-11-04 11:42:23.430565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.142 "name": "Existed_Raid", 00:09:58.142 "uuid": "348c24c3-d875-4858-997c-6a9233648105", 00:09:58.142 "strip_size_kb": 64, 00:09:58.142 "state": "offline", 00:09:58.142 "raid_level": "concat", 00:09:58.142 "superblock": true, 00:09:58.142 "num_base_bdevs": 3, 00:09:58.142 "num_base_bdevs_discovered": 2, 00:09:58.142 "num_base_bdevs_operational": 2, 00:09:58.142 "base_bdevs_list": [ 00:09:58.142 { 00:09:58.142 "name": null, 00:09:58.142 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.142 "is_configured": false, 00:09:58.142 "data_offset": 0, 00:09:58.142 "data_size": 63488 00:09:58.142 }, 00:09:58.142 { 00:09:58.142 "name": "BaseBdev2", 00:09:58.142 "uuid": "87e80659-6725-444d-94a5-c4feb78bf7c7", 00:09:58.142 "is_configured": true, 00:09:58.142 "data_offset": 2048, 00:09:58.142 "data_size": 63488 00:09:58.142 }, 00:09:58.142 { 00:09:58.142 "name": "BaseBdev3", 00:09:58.142 "uuid": "bd970828-ae36-46de-bde1-b805c14e0f99", 00:09:58.142 "is_configured": true, 00:09:58.142 "data_offset": 2048, 00:09:58.142 "data_size": 63488 00:09:58.142 } 00:09:58.142 ] 00:09:58.142 }' 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.142 11:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.711 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.711 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.711 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.711 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.711 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.712 [2024-11-04 11:42:24.085593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.712 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.971 [2024-11-04 11:42:24.245876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.971 [2024-11-04 11:42:24.246044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.971 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 BaseBdev2 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.972 
11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.972 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.972 [ 00:09:58.972 { 00:09:58.972 "name": "BaseBdev2", 00:09:58.972 "aliases": [ 00:09:58.972 "a3004a96-8da6-4629-8270-a090ab778efe" 00:09:58.972 ], 00:09:59.232 "product_name": "Malloc disk", 00:09:59.232 "block_size": 512, 00:09:59.232 "num_blocks": 65536, 00:09:59.232 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:09:59.232 "assigned_rate_limits": { 00:09:59.232 "rw_ios_per_sec": 0, 00:09:59.232 "rw_mbytes_per_sec": 0, 00:09:59.232 "r_mbytes_per_sec": 0, 00:09:59.232 "w_mbytes_per_sec": 0 
00:09:59.232 }, 00:09:59.232 "claimed": false, 00:09:59.232 "zoned": false, 00:09:59.232 "supported_io_types": { 00:09:59.232 "read": true, 00:09:59.232 "write": true, 00:09:59.232 "unmap": true, 00:09:59.232 "flush": true, 00:09:59.232 "reset": true, 00:09:59.232 "nvme_admin": false, 00:09:59.232 "nvme_io": false, 00:09:59.232 "nvme_io_md": false, 00:09:59.232 "write_zeroes": true, 00:09:59.232 "zcopy": true, 00:09:59.232 "get_zone_info": false, 00:09:59.232 "zone_management": false, 00:09:59.232 "zone_append": false, 00:09:59.232 "compare": false, 00:09:59.232 "compare_and_write": false, 00:09:59.232 "abort": true, 00:09:59.232 "seek_hole": false, 00:09:59.232 "seek_data": false, 00:09:59.232 "copy": true, 00:09:59.232 "nvme_iov_md": false 00:09:59.232 }, 00:09:59.232 "memory_domains": [ 00:09:59.232 { 00:09:59.232 "dma_device_id": "system", 00:09:59.232 "dma_device_type": 1 00:09:59.232 }, 00:09:59.232 { 00:09:59.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.232 "dma_device_type": 2 00:09:59.232 } 00:09:59.232 ], 00:09:59.232 "driver_specific": {} 00:09:59.232 } 00:09:59.232 ] 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.232 BaseBdev3 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.232 [ 00:09:59.232 { 00:09:59.232 "name": "BaseBdev3", 00:09:59.232 "aliases": [ 00:09:59.232 "12dda680-5a7b-4cc4-9a35-18d7c87dc791" 00:09:59.232 ], 00:09:59.232 "product_name": "Malloc disk", 00:09:59.232 "block_size": 512, 00:09:59.232 "num_blocks": 65536, 00:09:59.232 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:09:59.232 "assigned_rate_limits": { 00:09:59.232 "rw_ios_per_sec": 0, 00:09:59.232 "rw_mbytes_per_sec": 0, 
00:09:59.232 "r_mbytes_per_sec": 0, 00:09:59.232 "w_mbytes_per_sec": 0 00:09:59.232 }, 00:09:59.232 "claimed": false, 00:09:59.232 "zoned": false, 00:09:59.232 "supported_io_types": { 00:09:59.232 "read": true, 00:09:59.232 "write": true, 00:09:59.232 "unmap": true, 00:09:59.232 "flush": true, 00:09:59.232 "reset": true, 00:09:59.232 "nvme_admin": false, 00:09:59.232 "nvme_io": false, 00:09:59.232 "nvme_io_md": false, 00:09:59.232 "write_zeroes": true, 00:09:59.232 "zcopy": true, 00:09:59.232 "get_zone_info": false, 00:09:59.232 "zone_management": false, 00:09:59.232 "zone_append": false, 00:09:59.232 "compare": false, 00:09:59.232 "compare_and_write": false, 00:09:59.232 "abort": true, 00:09:59.232 "seek_hole": false, 00:09:59.232 "seek_data": false, 00:09:59.232 "copy": true, 00:09:59.232 "nvme_iov_md": false 00:09:59.232 }, 00:09:59.232 "memory_domains": [ 00:09:59.232 { 00:09:59.232 "dma_device_id": "system", 00:09:59.232 "dma_device_type": 1 00:09:59.232 }, 00:09:59.232 { 00:09:59.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.232 "dma_device_type": 2 00:09:59.232 } 00:09:59.232 ], 00:09:59.232 "driver_specific": {} 00:09:59.232 } 00:09:59.232 ] 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:59.232 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.233 [2024-11-04 11:42:24.592373] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.233 [2024-11-04 11:42:24.592531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.233 [2024-11-04 11:42:24.592580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.233 [2024-11-04 11:42:24.594737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.233 11:42:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.233 "name": "Existed_Raid", 00:09:59.233 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:09:59.233 "strip_size_kb": 64, 00:09:59.233 "state": "configuring", 00:09:59.233 "raid_level": "concat", 00:09:59.233 "superblock": true, 00:09:59.233 "num_base_bdevs": 3, 00:09:59.233 "num_base_bdevs_discovered": 2, 00:09:59.233 "num_base_bdevs_operational": 3, 00:09:59.233 "base_bdevs_list": [ 00:09:59.233 { 00:09:59.233 "name": "BaseBdev1", 00:09:59.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.233 "is_configured": false, 00:09:59.233 "data_offset": 0, 00:09:59.233 "data_size": 0 00:09:59.233 }, 00:09:59.233 { 00:09:59.233 "name": "BaseBdev2", 00:09:59.233 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:09:59.233 "is_configured": true, 00:09:59.233 "data_offset": 2048, 00:09:59.233 "data_size": 63488 00:09:59.233 }, 00:09:59.233 { 00:09:59.233 "name": "BaseBdev3", 00:09:59.233 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:09:59.233 "is_configured": true, 00:09:59.233 "data_offset": 2048, 00:09:59.233 "data_size": 63488 00:09:59.233 } 00:09:59.233 ] 00:09:59.233 }' 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.233 11:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.802 [2024-11-04 11:42:25.051627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.802 "name": "Existed_Raid", 00:09:59.802 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:09:59.802 "strip_size_kb": 64, 00:09:59.802 "state": "configuring", 00:09:59.802 "raid_level": "concat", 00:09:59.802 "superblock": true, 00:09:59.802 "num_base_bdevs": 3, 00:09:59.802 "num_base_bdevs_discovered": 1, 00:09:59.802 "num_base_bdevs_operational": 3, 00:09:59.802 "base_bdevs_list": [ 00:09:59.802 { 00:09:59.802 "name": "BaseBdev1", 00:09:59.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.802 "is_configured": false, 00:09:59.802 "data_offset": 0, 00:09:59.802 "data_size": 0 00:09:59.802 }, 00:09:59.802 { 00:09:59.802 "name": null, 00:09:59.802 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:09:59.802 "is_configured": false, 00:09:59.802 "data_offset": 0, 00:09:59.802 "data_size": 63488 00:09:59.802 }, 00:09:59.802 { 00:09:59.802 "name": "BaseBdev3", 00:09:59.802 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:09:59.802 "is_configured": true, 00:09:59.802 "data_offset": 2048, 00:09:59.802 "data_size": 63488 00:09:59.802 } 00:09:59.802 ] 00:09:59.802 }' 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.802 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.063 [2024-11-04 11:42:25.566776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.063 BaseBdev1 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.063 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.323 11:42:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.323 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:00.323 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.323 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.323 [ 00:10:00.323 { 00:10:00.323 "name": "BaseBdev1", 00:10:00.323 "aliases": [ 00:10:00.323 "af97583f-b8ad-4e5f-97dc-981125e80b57" 00:10:00.323 ], 00:10:00.323 "product_name": "Malloc disk", 00:10:00.323 "block_size": 512, 00:10:00.323 "num_blocks": 65536, 00:10:00.323 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:00.323 "assigned_rate_limits": { 00:10:00.323 "rw_ios_per_sec": 0, 00:10:00.323 "rw_mbytes_per_sec": 0, 00:10:00.323 "r_mbytes_per_sec": 0, 00:10:00.323 "w_mbytes_per_sec": 0 00:10:00.324 }, 00:10:00.324 "claimed": true, 00:10:00.324 "claim_type": "exclusive_write", 00:10:00.324 "zoned": false, 00:10:00.324 "supported_io_types": { 00:10:00.324 "read": true, 00:10:00.324 "write": true, 00:10:00.324 "unmap": true, 00:10:00.324 "flush": true, 00:10:00.324 "reset": true, 00:10:00.324 "nvme_admin": false, 00:10:00.324 "nvme_io": false, 00:10:00.324 "nvme_io_md": false, 00:10:00.324 "write_zeroes": true, 00:10:00.324 "zcopy": true, 00:10:00.324 "get_zone_info": false, 00:10:00.324 "zone_management": false, 00:10:00.324 "zone_append": false, 00:10:00.324 "compare": false, 00:10:00.324 "compare_and_write": false, 00:10:00.324 "abort": true, 00:10:00.324 "seek_hole": false, 00:10:00.324 "seek_data": false, 00:10:00.324 "copy": true, 00:10:00.324 "nvme_iov_md": false 00:10:00.324 }, 00:10:00.324 "memory_domains": [ 00:10:00.324 { 00:10:00.324 "dma_device_id": "system", 00:10:00.324 "dma_device_type": 1 00:10:00.324 }, 00:10:00.324 { 00:10:00.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.324 
"dma_device_type": 2 00:10:00.324 } 00:10:00.324 ], 00:10:00.324 "driver_specific": {} 00:10:00.324 } 00:10:00.324 ] 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.324 "name": "Existed_Raid", 00:10:00.324 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:00.324 "strip_size_kb": 64, 00:10:00.324 "state": "configuring", 00:10:00.324 "raid_level": "concat", 00:10:00.324 "superblock": true, 00:10:00.324 "num_base_bdevs": 3, 00:10:00.324 "num_base_bdevs_discovered": 2, 00:10:00.324 "num_base_bdevs_operational": 3, 00:10:00.324 "base_bdevs_list": [ 00:10:00.324 { 00:10:00.324 "name": "BaseBdev1", 00:10:00.324 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:00.324 "is_configured": true, 00:10:00.324 "data_offset": 2048, 00:10:00.324 "data_size": 63488 00:10:00.324 }, 00:10:00.324 { 00:10:00.324 "name": null, 00:10:00.324 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:10:00.324 "is_configured": false, 00:10:00.324 "data_offset": 0, 00:10:00.324 "data_size": 63488 00:10:00.324 }, 00:10:00.324 { 00:10:00.324 "name": "BaseBdev3", 00:10:00.324 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:10:00.324 "is_configured": true, 00:10:00.324 "data_offset": 2048, 00:10:00.324 "data_size": 63488 00:10:00.324 } 00:10:00.324 ] 00:10:00.324 }' 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.324 11:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.584 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.584 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.584 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.584 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 [2024-11-04 11:42:26.153876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.876 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.877 
11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.877 "name": "Existed_Raid", 00:10:00.877 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:00.877 "strip_size_kb": 64, 00:10:00.877 "state": "configuring", 00:10:00.877 "raid_level": "concat", 00:10:00.877 "superblock": true, 00:10:00.877 "num_base_bdevs": 3, 00:10:00.877 "num_base_bdevs_discovered": 1, 00:10:00.877 "num_base_bdevs_operational": 3, 00:10:00.877 "base_bdevs_list": [ 00:10:00.877 { 00:10:00.877 "name": "BaseBdev1", 00:10:00.877 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:00.877 "is_configured": true, 00:10:00.877 "data_offset": 2048, 00:10:00.877 "data_size": 63488 00:10:00.877 }, 00:10:00.877 { 00:10:00.877 "name": null, 00:10:00.877 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:10:00.877 "is_configured": false, 00:10:00.877 "data_offset": 0, 00:10:00.877 "data_size": 63488 00:10:00.877 }, 00:10:00.877 { 00:10:00.877 "name": null, 00:10:00.877 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:10:00.877 "is_configured": false, 00:10:00.877 "data_offset": 0, 00:10:00.877 "data_size": 63488 00:10:00.877 } 00:10:00.877 ] 00:10:00.877 }' 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.877 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.135 
11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.135 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.135 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.135 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.135 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.395 [2024-11-04 11:42:26.681050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.395 "name": "Existed_Raid", 00:10:01.395 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:01.395 "strip_size_kb": 64, 00:10:01.395 "state": "configuring", 00:10:01.395 "raid_level": "concat", 00:10:01.395 "superblock": true, 00:10:01.395 "num_base_bdevs": 3, 00:10:01.395 "num_base_bdevs_discovered": 2, 00:10:01.395 "num_base_bdevs_operational": 3, 00:10:01.395 "base_bdevs_list": [ 00:10:01.395 { 00:10:01.395 "name": "BaseBdev1", 00:10:01.395 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:01.395 "is_configured": true, 00:10:01.395 "data_offset": 2048, 00:10:01.395 "data_size": 63488 00:10:01.395 }, 00:10:01.395 { 00:10:01.395 "name": null, 00:10:01.395 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:10:01.395 "is_configured": false, 00:10:01.395 "data_offset": 0, 00:10:01.395 "data_size": 
63488 00:10:01.395 }, 00:10:01.395 { 00:10:01.395 "name": "BaseBdev3", 00:10:01.395 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:10:01.395 "is_configured": true, 00:10:01.395 "data_offset": 2048, 00:10:01.395 "data_size": 63488 00:10:01.395 } 00:10:01.395 ] 00:10:01.395 }' 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.395 11:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.964 [2024-11-04 11:42:27.236130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.964 "name": "Existed_Raid", 00:10:01.964 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:01.964 "strip_size_kb": 64, 00:10:01.964 "state": "configuring", 00:10:01.964 "raid_level": "concat", 00:10:01.964 "superblock": true, 00:10:01.964 "num_base_bdevs": 3, 00:10:01.964 "num_base_bdevs_discovered": 1, 00:10:01.964 "num_base_bdevs_operational": 
3, 00:10:01.964 "base_bdevs_list": [ 00:10:01.964 { 00:10:01.964 "name": null, 00:10:01.964 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:01.964 "is_configured": false, 00:10:01.964 "data_offset": 0, 00:10:01.964 "data_size": 63488 00:10:01.964 }, 00:10:01.964 { 00:10:01.964 "name": null, 00:10:01.964 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:10:01.964 "is_configured": false, 00:10:01.964 "data_offset": 0, 00:10:01.964 "data_size": 63488 00:10:01.964 }, 00:10:01.964 { 00:10:01.964 "name": "BaseBdev3", 00:10:01.964 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:10:01.964 "is_configured": true, 00:10:01.964 "data_offset": 2048, 00:10:01.964 "data_size": 63488 00:10:01.964 } 00:10:01.964 ] 00:10:01.964 }' 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.964 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:02.530 [2024-11-04 11:42:27.802199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.530 "name": "Existed_Raid", 00:10:02.530 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:02.530 "strip_size_kb": 64, 00:10:02.530 "state": "configuring", 00:10:02.530 "raid_level": "concat", 00:10:02.530 "superblock": true, 00:10:02.530 "num_base_bdevs": 3, 00:10:02.530 "num_base_bdevs_discovered": 2, 00:10:02.530 "num_base_bdevs_operational": 3, 00:10:02.530 "base_bdevs_list": [ 00:10:02.530 { 00:10:02.530 "name": null, 00:10:02.530 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:02.530 "is_configured": false, 00:10:02.530 "data_offset": 0, 00:10:02.530 "data_size": 63488 00:10:02.530 }, 00:10:02.530 { 00:10:02.530 "name": "BaseBdev2", 00:10:02.530 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:10:02.530 "is_configured": true, 00:10:02.530 "data_offset": 2048, 00:10:02.530 "data_size": 63488 00:10:02.530 }, 00:10:02.530 { 00:10:02.530 "name": "BaseBdev3", 00:10:02.530 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:10:02.530 "is_configured": true, 00:10:02.530 "data_offset": 2048, 00:10:02.530 "data_size": 63488 00:10:02.530 } 00:10:02.530 ] 00:10:02.530 }' 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.530 11:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.789 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.789 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.789 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.789 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:02.789 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:03.049 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u af97583f-b8ad-4e5f-97dc-981125e80b57 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.050 [2024-11-04 11:42:28.407905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.050 [2024-11-04 11:42:28.408328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:03.050 [2024-11-04 11:42:28.408386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:03.050 [2024-11-04 11:42:28.408745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:03.050 NewBaseBdev 00:10:03.050 [2024-11-04 11:42:28.408960] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:03.050 [2024-11-04 11:42:28.408975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:03.050 [2024-11-04 11:42:28.409130] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.050 [ 00:10:03.050 { 00:10:03.050 "name": "NewBaseBdev", 00:10:03.050 "aliases": [ 00:10:03.050 "af97583f-b8ad-4e5f-97dc-981125e80b57" 00:10:03.050 ], 00:10:03.050 "product_name": "Malloc disk", 00:10:03.050 "block_size": 512, 00:10:03.050 "num_blocks": 65536, 00:10:03.050 "uuid": 
"af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:03.050 "assigned_rate_limits": { 00:10:03.050 "rw_ios_per_sec": 0, 00:10:03.050 "rw_mbytes_per_sec": 0, 00:10:03.050 "r_mbytes_per_sec": 0, 00:10:03.050 "w_mbytes_per_sec": 0 00:10:03.050 }, 00:10:03.050 "claimed": true, 00:10:03.050 "claim_type": "exclusive_write", 00:10:03.050 "zoned": false, 00:10:03.050 "supported_io_types": { 00:10:03.050 "read": true, 00:10:03.050 "write": true, 00:10:03.050 "unmap": true, 00:10:03.050 "flush": true, 00:10:03.050 "reset": true, 00:10:03.050 "nvme_admin": false, 00:10:03.050 "nvme_io": false, 00:10:03.050 "nvme_io_md": false, 00:10:03.050 "write_zeroes": true, 00:10:03.050 "zcopy": true, 00:10:03.050 "get_zone_info": false, 00:10:03.050 "zone_management": false, 00:10:03.050 "zone_append": false, 00:10:03.050 "compare": false, 00:10:03.050 "compare_and_write": false, 00:10:03.050 "abort": true, 00:10:03.050 "seek_hole": false, 00:10:03.050 "seek_data": false, 00:10:03.050 "copy": true, 00:10:03.050 "nvme_iov_md": false 00:10:03.050 }, 00:10:03.050 "memory_domains": [ 00:10:03.050 { 00:10:03.050 "dma_device_id": "system", 00:10:03.050 "dma_device_type": 1 00:10:03.050 }, 00:10:03.050 { 00:10:03.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.050 "dma_device_type": 2 00:10:03.050 } 00:10:03.050 ], 00:10:03.050 "driver_specific": {} 00:10:03.050 } 00:10:03.050 ] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.050 11:42:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.050 "name": "Existed_Raid", 00:10:03.050 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:03.050 "strip_size_kb": 64, 00:10:03.050 "state": "online", 00:10:03.050 "raid_level": "concat", 00:10:03.050 "superblock": true, 00:10:03.050 "num_base_bdevs": 3, 00:10:03.050 "num_base_bdevs_discovered": 3, 00:10:03.050 "num_base_bdevs_operational": 3, 00:10:03.050 "base_bdevs_list": [ 00:10:03.050 { 00:10:03.050 "name": "NewBaseBdev", 00:10:03.050 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:03.050 "is_configured": 
true, 00:10:03.050 "data_offset": 2048, 00:10:03.050 "data_size": 63488 00:10:03.050 }, 00:10:03.050 { 00:10:03.050 "name": "BaseBdev2", 00:10:03.050 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:10:03.050 "is_configured": true, 00:10:03.050 "data_offset": 2048, 00:10:03.050 "data_size": 63488 00:10:03.050 }, 00:10:03.050 { 00:10:03.050 "name": "BaseBdev3", 00:10:03.050 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:10:03.050 "is_configured": true, 00:10:03.050 "data_offset": 2048, 00:10:03.050 "data_size": 63488 00:10:03.050 } 00:10:03.050 ] 00:10:03.050 }' 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.050 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.618 11:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.618 [2024-11-04 11:42:28.995258] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.618 "name": "Existed_Raid", 00:10:03.618 "aliases": [ 00:10:03.618 "693e9342-f5fa-4def-8649-66d398b6592a" 00:10:03.618 ], 00:10:03.618 "product_name": "Raid Volume", 00:10:03.618 "block_size": 512, 00:10:03.618 "num_blocks": 190464, 00:10:03.618 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:03.618 "assigned_rate_limits": { 00:10:03.618 "rw_ios_per_sec": 0, 00:10:03.618 "rw_mbytes_per_sec": 0, 00:10:03.618 "r_mbytes_per_sec": 0, 00:10:03.618 "w_mbytes_per_sec": 0 00:10:03.618 }, 00:10:03.618 "claimed": false, 00:10:03.618 "zoned": false, 00:10:03.618 "supported_io_types": { 00:10:03.618 "read": true, 00:10:03.618 "write": true, 00:10:03.618 "unmap": true, 00:10:03.618 "flush": true, 00:10:03.618 "reset": true, 00:10:03.618 "nvme_admin": false, 00:10:03.618 "nvme_io": false, 00:10:03.618 "nvme_io_md": false, 00:10:03.618 "write_zeroes": true, 00:10:03.618 "zcopy": false, 00:10:03.618 "get_zone_info": false, 00:10:03.618 "zone_management": false, 00:10:03.618 "zone_append": false, 00:10:03.618 "compare": false, 00:10:03.618 "compare_and_write": false, 00:10:03.618 "abort": false, 00:10:03.618 "seek_hole": false, 00:10:03.618 "seek_data": false, 00:10:03.618 "copy": false, 00:10:03.618 "nvme_iov_md": false 00:10:03.618 }, 00:10:03.618 "memory_domains": [ 00:10:03.618 { 00:10:03.618 "dma_device_id": "system", 00:10:03.618 "dma_device_type": 1 00:10:03.618 }, 00:10:03.618 { 00:10:03.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.618 "dma_device_type": 2 00:10:03.618 }, 00:10:03.618 { 00:10:03.618 "dma_device_id": "system", 00:10:03.618 "dma_device_type": 1 00:10:03.618 }, 00:10:03.618 { 00:10:03.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.618 
"dma_device_type": 2 00:10:03.618 }, 00:10:03.618 { 00:10:03.618 "dma_device_id": "system", 00:10:03.618 "dma_device_type": 1 00:10:03.618 }, 00:10:03.618 { 00:10:03.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.618 "dma_device_type": 2 00:10:03.618 } 00:10:03.618 ], 00:10:03.618 "driver_specific": { 00:10:03.618 "raid": { 00:10:03.618 "uuid": "693e9342-f5fa-4def-8649-66d398b6592a", 00:10:03.618 "strip_size_kb": 64, 00:10:03.618 "state": "online", 00:10:03.618 "raid_level": "concat", 00:10:03.618 "superblock": true, 00:10:03.618 "num_base_bdevs": 3, 00:10:03.618 "num_base_bdevs_discovered": 3, 00:10:03.618 "num_base_bdevs_operational": 3, 00:10:03.618 "base_bdevs_list": [ 00:10:03.618 { 00:10:03.618 "name": "NewBaseBdev", 00:10:03.618 "uuid": "af97583f-b8ad-4e5f-97dc-981125e80b57", 00:10:03.618 "is_configured": true, 00:10:03.618 "data_offset": 2048, 00:10:03.618 "data_size": 63488 00:10:03.618 }, 00:10:03.618 { 00:10:03.618 "name": "BaseBdev2", 00:10:03.618 "uuid": "a3004a96-8da6-4629-8270-a090ab778efe", 00:10:03.618 "is_configured": true, 00:10:03.618 "data_offset": 2048, 00:10:03.618 "data_size": 63488 00:10:03.618 }, 00:10:03.618 { 00:10:03.618 "name": "BaseBdev3", 00:10:03.618 "uuid": "12dda680-5a7b-4cc4-9a35-18d7c87dc791", 00:10:03.618 "is_configured": true, 00:10:03.618 "data_offset": 2048, 00:10:03.618 "data_size": 63488 00:10:03.618 } 00:10:03.618 ] 00:10:03.618 } 00:10:03.618 } 00:10:03.618 }' 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:03.618 BaseBdev2 00:10:03.618 BaseBdev3' 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.618 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.878 
11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.878 [2024-11-04 11:42:29.282559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.878 [2024-11-04 11:42:29.282691] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.878 [2024-11-04 11:42:29.282830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.878 [2024-11-04 11:42:29.282942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.878 [2024-11-04 11:42:29.283007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:03.878 11:42:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66452 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66452 ']' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66452 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66452 00:10:03.878 killing process with pid 66452 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66452' 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66452 00:10:03.878 [2024-11-04 11:42:29.329939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.878 11:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66452 00:10:04.138 [2024-11-04 11:42:29.641959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.515 11:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:05.515 ************************************ 00:10:05.515 END TEST raid_state_function_test_sb 00:10:05.515 ************************************ 00:10:05.515 00:10:05.515 real 0m11.323s 00:10:05.515 user 0m17.970s 00:10:05.515 sys 0m2.024s 00:10:05.515 11:42:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.515 11:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.515 11:42:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:05.515 11:42:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:05.515 11:42:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.515 11:42:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.515 ************************************ 00:10:05.515 START TEST raid_superblock_test 00:10:05.515 ************************************ 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67083 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:05.515 11:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67083 00:10:05.516 11:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67083 ']' 00:10:05.516 11:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.516 11:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:05.516 11:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.516 11:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:05.516 11:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.516 [2024-11-04 11:42:31.028388] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:10:05.516 [2024-11-04 11:42:31.028635] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67083 ] 00:10:05.775 [2024-11-04 11:42:31.198713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.035 [2024-11-04 11:42:31.313527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.035 [2024-11-04 11:42:31.532963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.035 [2024-11-04 11:42:31.533103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:06.643 
11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.643 malloc1 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.643 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.643 [2024-11-04 11:42:31.949969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:06.643 [2024-11-04 11:42:31.950110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.643 [2024-11-04 11:42:31.950156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:06.643 [2024-11-04 11:42:31.950187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.644 [2024-11-04 11:42:31.952460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.644 [2024-11-04 11:42:31.952548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:06.644 pt1 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.644 11:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.644 malloc2 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.644 [2024-11-04 11:42:32.008594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:06.644 [2024-11-04 11:42:32.008666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.644 [2024-11-04 11:42:32.008691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:06.644 [2024-11-04 11:42:32.008701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.644 [2024-11-04 11:42:32.011018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.644 [2024-11-04 11:42:32.011115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:06.644 
pt2 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.644 malloc3 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.644 [2024-11-04 11:42:32.076031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:06.644 [2024-11-04 11:42:32.076205] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.644 [2024-11-04 11:42:32.076272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:06.644 [2024-11-04 11:42:32.076318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.644 [2024-11-04 11:42:32.078751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.644 [2024-11-04 11:42:32.078834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:06.644 pt3 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.644 [2024-11-04 11:42:32.088050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:06.644 [2024-11-04 11:42:32.090118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:06.644 [2024-11-04 11:42:32.090231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:06.644 [2024-11-04 11:42:32.090483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:06.644 [2024-11-04 11:42:32.090540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:06.644 [2024-11-04 11:42:32.090864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:06.644 [2024-11-04 11:42:32.091094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:06.644 [2024-11-04 11:42:32.091145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:06.644 [2024-11-04 11:42:32.091387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.644 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.644 "name": "raid_bdev1", 00:10:06.644 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:06.644 "strip_size_kb": 64, 00:10:06.644 "state": "online", 00:10:06.644 "raid_level": "concat", 00:10:06.644 "superblock": true, 00:10:06.644 "num_base_bdevs": 3, 00:10:06.644 "num_base_bdevs_discovered": 3, 00:10:06.644 "num_base_bdevs_operational": 3, 00:10:06.644 "base_bdevs_list": [ 00:10:06.644 { 00:10:06.644 "name": "pt1", 00:10:06.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.644 "is_configured": true, 00:10:06.644 "data_offset": 2048, 00:10:06.644 "data_size": 63488 00:10:06.644 }, 00:10:06.644 { 00:10:06.644 "name": "pt2", 00:10:06.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.644 "is_configured": true, 00:10:06.644 "data_offset": 2048, 00:10:06.644 "data_size": 63488 00:10:06.644 }, 00:10:06.644 { 00:10:06.645 "name": "pt3", 00:10:06.645 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.645 "is_configured": true, 00:10:06.645 "data_offset": 2048, 00:10:06.645 "data_size": 63488 00:10:06.645 } 00:10:06.645 ] 00:10:06.645 }' 00:10:06.645 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.645 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.219 [2024-11-04 11:42:32.547576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.219 "name": "raid_bdev1", 00:10:07.219 "aliases": [ 00:10:07.219 "683335a4-1ac5-4784-abf1-92a8d6703b07" 00:10:07.219 ], 00:10:07.219 "product_name": "Raid Volume", 00:10:07.219 "block_size": 512, 00:10:07.219 "num_blocks": 190464, 00:10:07.219 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:07.219 "assigned_rate_limits": { 00:10:07.219 "rw_ios_per_sec": 0, 00:10:07.219 "rw_mbytes_per_sec": 0, 00:10:07.219 "r_mbytes_per_sec": 0, 00:10:07.219 "w_mbytes_per_sec": 0 00:10:07.219 }, 00:10:07.219 "claimed": false, 00:10:07.219 "zoned": false, 00:10:07.219 "supported_io_types": { 00:10:07.219 "read": true, 00:10:07.219 "write": true, 00:10:07.219 "unmap": true, 00:10:07.219 "flush": true, 00:10:07.219 "reset": true, 00:10:07.219 "nvme_admin": false, 00:10:07.219 "nvme_io": false, 00:10:07.219 "nvme_io_md": false, 00:10:07.219 "write_zeroes": true, 00:10:07.219 "zcopy": false, 00:10:07.219 "get_zone_info": false, 00:10:07.219 "zone_management": false, 00:10:07.219 "zone_append": false, 00:10:07.219 "compare": 
false, 00:10:07.219 "compare_and_write": false, 00:10:07.219 "abort": false, 00:10:07.219 "seek_hole": false, 00:10:07.219 "seek_data": false, 00:10:07.219 "copy": false, 00:10:07.219 "nvme_iov_md": false 00:10:07.219 }, 00:10:07.219 "memory_domains": [ 00:10:07.219 { 00:10:07.219 "dma_device_id": "system", 00:10:07.219 "dma_device_type": 1 00:10:07.219 }, 00:10:07.219 { 00:10:07.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.219 "dma_device_type": 2 00:10:07.219 }, 00:10:07.219 { 00:10:07.219 "dma_device_id": "system", 00:10:07.219 "dma_device_type": 1 00:10:07.219 }, 00:10:07.219 { 00:10:07.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.219 "dma_device_type": 2 00:10:07.219 }, 00:10:07.219 { 00:10:07.219 "dma_device_id": "system", 00:10:07.219 "dma_device_type": 1 00:10:07.219 }, 00:10:07.219 { 00:10:07.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.219 "dma_device_type": 2 00:10:07.219 } 00:10:07.219 ], 00:10:07.219 "driver_specific": { 00:10:07.219 "raid": { 00:10:07.219 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:07.219 "strip_size_kb": 64, 00:10:07.219 "state": "online", 00:10:07.219 "raid_level": "concat", 00:10:07.219 "superblock": true, 00:10:07.219 "num_base_bdevs": 3, 00:10:07.219 "num_base_bdevs_discovered": 3, 00:10:07.219 "num_base_bdevs_operational": 3, 00:10:07.219 "base_bdevs_list": [ 00:10:07.219 { 00:10:07.219 "name": "pt1", 00:10:07.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.219 "is_configured": true, 00:10:07.219 "data_offset": 2048, 00:10:07.219 "data_size": 63488 00:10:07.219 }, 00:10:07.219 { 00:10:07.219 "name": "pt2", 00:10:07.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.219 "is_configured": true, 00:10:07.219 "data_offset": 2048, 00:10:07.219 "data_size": 63488 00:10:07.219 }, 00:10:07.219 { 00:10:07.219 "name": "pt3", 00:10:07.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.219 "is_configured": true, 00:10:07.219 "data_offset": 2048, 00:10:07.219 
"data_size": 63488 00:10:07.219 } 00:10:07.219 ] 00:10:07.219 } 00:10:07.219 } 00:10:07.219 }' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:07.219 pt2 00:10:07.219 pt3' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.219 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 [2024-11-04 11:42:32.799087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=683335a4-1ac5-4784-abf1-92a8d6703b07 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 683335a4-1ac5-4784-abf1-92a8d6703b07 ']' 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 [2024-11-04 11:42:32.830759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.480 [2024-11-04 11:42:32.830799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.480 [2024-11-04 11:42:32.830888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.480 [2024-11-04 11:42:32.830951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.480 [2024-11-04 11:42:32.830962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.480 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.481 11:42:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 [2024-11-04 11:42:32.974594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:07.481 [2024-11-04 11:42:32.976636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:10:07.481 [2024-11-04 11:42:32.976779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:07.481 [2024-11-04 11:42:32.976856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:07.481 [2024-11-04 11:42:32.976922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:07.481 [2024-11-04 11:42:32.976944] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:07.481 [2024-11-04 11:42:32.976964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.481 [2024-11-04 11:42:32.976975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:07.481 request: 00:10:07.481 { 00:10:07.481 "name": "raid_bdev1", 00:10:07.481 "raid_level": "concat", 00:10:07.481 "base_bdevs": [ 00:10:07.481 "malloc1", 00:10:07.481 "malloc2", 00:10:07.481 "malloc3" 00:10:07.481 ], 00:10:07.481 "strip_size_kb": 64, 00:10:07.481 "superblock": false, 00:10:07.481 "method": "bdev_raid_create", 00:10:07.481 "req_id": 1 00:10:07.481 } 00:10:07.481 Got JSON-RPC error response 00:10:07.481 response: 00:10:07.481 { 00:10:07.481 "code": -17, 00:10:07.481 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:07.481 } 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 11:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 [2024-11-04 11:42:33.042463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:07.741 [2024-11-04 11:42:33.042672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.741 [2024-11-04 11:42:33.042738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:07.741 [2024-11-04 11:42:33.042800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.741 [2024-11-04 11:42:33.046104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.741 [2024-11-04 11:42:33.046216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:07.741 [2024-11-04 11:42:33.046407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:07.741 [2024-11-04 11:42:33.046595] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:07.741 pt1 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.741 "name": "raid_bdev1", 
00:10:07.741 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:07.741 "strip_size_kb": 64, 00:10:07.741 "state": "configuring", 00:10:07.741 "raid_level": "concat", 00:10:07.741 "superblock": true, 00:10:07.741 "num_base_bdevs": 3, 00:10:07.741 "num_base_bdevs_discovered": 1, 00:10:07.741 "num_base_bdevs_operational": 3, 00:10:07.741 "base_bdevs_list": [ 00:10:07.741 { 00:10:07.741 "name": "pt1", 00:10:07.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.741 "is_configured": true, 00:10:07.741 "data_offset": 2048, 00:10:07.741 "data_size": 63488 00:10:07.741 }, 00:10:07.741 { 00:10:07.741 "name": null, 00:10:07.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.741 "is_configured": false, 00:10:07.741 "data_offset": 2048, 00:10:07.741 "data_size": 63488 00:10:07.741 }, 00:10:07.741 { 00:10:07.741 "name": null, 00:10:07.741 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.741 "is_configured": false, 00:10:07.741 "data_offset": 2048, 00:10:07.741 "data_size": 63488 00:10:07.741 } 00:10:07.741 ] 00:10:07.741 }' 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.741 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.001 [2024-11-04 11:42:33.497739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.001 [2024-11-04 11:42:33.497823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.001 [2024-11-04 11:42:33.497852] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:08.001 [2024-11-04 11:42:33.497862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.001 [2024-11-04 11:42:33.498365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.001 [2024-11-04 11:42:33.498385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.001 [2024-11-04 11:42:33.498492] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:08.001 [2024-11-04 11:42:33.498517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.001 pt2 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.001 [2024-11-04 11:42:33.505728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.001 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.002 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.002 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.002 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.261 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.261 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.261 "name": "raid_bdev1", 00:10:08.261 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:08.261 "strip_size_kb": 64, 00:10:08.261 "state": "configuring", 00:10:08.261 "raid_level": "concat", 00:10:08.261 "superblock": true, 00:10:08.261 "num_base_bdevs": 3, 00:10:08.261 "num_base_bdevs_discovered": 1, 00:10:08.261 "num_base_bdevs_operational": 3, 00:10:08.261 "base_bdevs_list": [ 00:10:08.261 { 00:10:08.261 "name": "pt1", 00:10:08.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.261 "is_configured": true, 00:10:08.261 "data_offset": 2048, 00:10:08.261 "data_size": 63488 00:10:08.261 }, 00:10:08.261 { 00:10:08.261 "name": null, 00:10:08.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.261 "is_configured": false, 00:10:08.261 "data_offset": 0, 00:10:08.261 "data_size": 63488 00:10:08.261 }, 00:10:08.261 { 00:10:08.261 "name": null, 00:10:08.261 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.261 "is_configured": false, 00:10:08.261 "data_offset": 2048, 00:10:08.261 "data_size": 63488 00:10:08.261 } 00:10:08.261 ] 00:10:08.261 }' 00:10:08.261 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.261 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.521 [2024-11-04 11:42:33.956921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.521 [2024-11-04 11:42:33.957006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.521 [2024-11-04 11:42:33.957026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:08.521 [2024-11-04 11:42:33.957037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.521 [2024-11-04 11:42:33.957513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.521 [2024-11-04 11:42:33.957536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.521 [2024-11-04 11:42:33.957622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:08.521 [2024-11-04 11:42:33.957647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.521 pt2 00:10:08.521 11:42:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.521 [2024-11-04 11:42:33.964882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:08.521 [2024-11-04 11:42:33.964939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.521 [2024-11-04 11:42:33.964955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:08.521 [2024-11-04 11:42:33.964965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.521 [2024-11-04 11:42:33.965344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.521 [2024-11-04 11:42:33.965365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:08.521 [2024-11-04 11:42:33.965448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:08.521 [2024-11-04 11:42:33.965471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:08.521 [2024-11-04 11:42:33.965597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:08.521 [2024-11-04 11:42:33.965609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:08.521 [2024-11-04 11:42:33.965869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:08.521 [2024-11-04 11:42:33.966021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:08.521 [2024-11-04 11:42:33.966029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:08.521 [2024-11-04 11:42:33.966171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.521 pt3 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.521 11:42:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.521 11:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.521 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.521 "name": "raid_bdev1", 00:10:08.521 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:08.521 "strip_size_kb": 64, 00:10:08.521 "state": "online", 00:10:08.521 "raid_level": "concat", 00:10:08.521 "superblock": true, 00:10:08.521 "num_base_bdevs": 3, 00:10:08.521 "num_base_bdevs_discovered": 3, 00:10:08.521 "num_base_bdevs_operational": 3, 00:10:08.521 "base_bdevs_list": [ 00:10:08.521 { 00:10:08.521 "name": "pt1", 00:10:08.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.521 "is_configured": true, 00:10:08.521 "data_offset": 2048, 00:10:08.521 "data_size": 63488 00:10:08.521 }, 00:10:08.521 { 00:10:08.521 "name": "pt2", 00:10:08.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.521 "is_configured": true, 00:10:08.521 "data_offset": 2048, 00:10:08.521 "data_size": 63488 00:10:08.521 }, 00:10:08.521 { 00:10:08.521 "name": "pt3", 00:10:08.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.521 "is_configured": true, 00:10:08.521 "data_offset": 2048, 00:10:08.521 "data_size": 63488 00:10:08.521 } 00:10:08.521 ] 00:10:08.521 }' 00:10:08.521 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.521 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.091 [2024-11-04 11:42:34.456580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.091 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.091 "name": "raid_bdev1", 00:10:09.091 "aliases": [ 00:10:09.091 "683335a4-1ac5-4784-abf1-92a8d6703b07" 00:10:09.091 ], 00:10:09.091 "product_name": "Raid Volume", 00:10:09.091 "block_size": 512, 00:10:09.091 "num_blocks": 190464, 00:10:09.091 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:09.091 "assigned_rate_limits": { 00:10:09.091 "rw_ios_per_sec": 0, 00:10:09.091 "rw_mbytes_per_sec": 0, 00:10:09.091 "r_mbytes_per_sec": 0, 00:10:09.091 "w_mbytes_per_sec": 0 00:10:09.091 }, 00:10:09.091 "claimed": false, 00:10:09.091 "zoned": false, 00:10:09.091 "supported_io_types": { 00:10:09.091 "read": true, 00:10:09.091 "write": true, 00:10:09.091 "unmap": true, 00:10:09.091 "flush": true, 00:10:09.091 "reset": true, 00:10:09.091 "nvme_admin": false, 00:10:09.091 "nvme_io": false, 
00:10:09.091 "nvme_io_md": false, 00:10:09.092 "write_zeroes": true, 00:10:09.092 "zcopy": false, 00:10:09.092 "get_zone_info": false, 00:10:09.092 "zone_management": false, 00:10:09.092 "zone_append": false, 00:10:09.092 "compare": false, 00:10:09.092 "compare_and_write": false, 00:10:09.092 "abort": false, 00:10:09.092 "seek_hole": false, 00:10:09.092 "seek_data": false, 00:10:09.092 "copy": false, 00:10:09.092 "nvme_iov_md": false 00:10:09.092 }, 00:10:09.092 "memory_domains": [ 00:10:09.092 { 00:10:09.092 "dma_device_id": "system", 00:10:09.092 "dma_device_type": 1 00:10:09.092 }, 00:10:09.092 { 00:10:09.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.092 "dma_device_type": 2 00:10:09.092 }, 00:10:09.092 { 00:10:09.092 "dma_device_id": "system", 00:10:09.092 "dma_device_type": 1 00:10:09.092 }, 00:10:09.092 { 00:10:09.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.092 "dma_device_type": 2 00:10:09.092 }, 00:10:09.092 { 00:10:09.092 "dma_device_id": "system", 00:10:09.092 "dma_device_type": 1 00:10:09.092 }, 00:10:09.092 { 00:10:09.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.092 "dma_device_type": 2 00:10:09.092 } 00:10:09.092 ], 00:10:09.092 "driver_specific": { 00:10:09.092 "raid": { 00:10:09.092 "uuid": "683335a4-1ac5-4784-abf1-92a8d6703b07", 00:10:09.092 "strip_size_kb": 64, 00:10:09.092 "state": "online", 00:10:09.092 "raid_level": "concat", 00:10:09.092 "superblock": true, 00:10:09.092 "num_base_bdevs": 3, 00:10:09.092 "num_base_bdevs_discovered": 3, 00:10:09.092 "num_base_bdevs_operational": 3, 00:10:09.092 "base_bdevs_list": [ 00:10:09.092 { 00:10:09.092 "name": "pt1", 00:10:09.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.092 "is_configured": true, 00:10:09.092 "data_offset": 2048, 00:10:09.092 "data_size": 63488 00:10:09.092 }, 00:10:09.092 { 00:10:09.092 "name": "pt2", 00:10:09.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.092 "is_configured": true, 00:10:09.092 "data_offset": 2048, 00:10:09.092 
"data_size": 63488 00:10:09.092 }, 00:10:09.092 { 00:10:09.092 "name": "pt3", 00:10:09.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.092 "is_configured": true, 00:10:09.092 "data_offset": 2048, 00:10:09.092 "data_size": 63488 00:10:09.092 } 00:10:09.092 ] 00:10:09.092 } 00:10:09.092 } 00:10:09.092 }' 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:09.092 pt2 00:10:09.092 pt3' 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.092 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:09.355 [2024-11-04 11:42:34.751979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 683335a4-1ac5-4784-abf1-92a8d6703b07 '!=' 683335a4-1ac5-4784-abf1-92a8d6703b07 ']' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67083 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67083 ']' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67083 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67083 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67083' 00:10:09.355 killing process with pid 67083 00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67083 00:10:09.355 [2024-11-04 11:42:34.838200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:09.355 11:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67083 00:10:09.355 [2024-11-04 11:42:34.838427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.355 [2024-11-04 11:42:34.838501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.355 [2024-11-04 11:42:34.838517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:09.965 [2024-11-04 11:42:35.172917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.903 11:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:10.903 00:10:10.903 real 0m5.441s 00:10:10.903 user 0m7.769s 00:10:10.903 sys 0m0.923s 00:10:10.903 11:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.903 11:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.903 ************************************ 00:10:10.903 END TEST raid_superblock_test 00:10:10.903 ************************************ 00:10:11.163 11:42:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:11.163 11:42:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:11.163 11:42:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.163 11:42:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.163 ************************************ 00:10:11.163 START TEST raid_read_error_test 00:10:11.163 ************************************ 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:11.163 11:42:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Vy2nVDeiW3 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67342 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67342 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67342 ']' 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:11.163 11:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.163 [2024-11-04 11:42:36.560971] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:10:11.163 [2024-11-04 11:42:36.561091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67342 ] 00:10:11.422 [2024-11-04 11:42:36.736338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.422 [2024-11-04 11:42:36.856615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.681 [2024-11-04 11:42:37.060861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.681 [2024-11-04 11:42:37.060930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.940 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:11.940 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:11.940 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:11.940 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:11.940 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.940 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 BaseBdev1_malloc 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 true 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 [2024-11-04 11:42:37.491511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:12.199 [2024-11-04 11:42:37.491570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.199 [2024-11-04 11:42:37.491591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:12.199 [2024-11-04 11:42:37.491603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.199 [2024-11-04 11:42:37.493968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.199 [2024-11-04 11:42:37.494011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:12.199 BaseBdev1 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 BaseBdev2_malloc 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 true 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 [2024-11-04 11:42:37.558233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:12.199 [2024-11-04 11:42:37.558304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.199 [2024-11-04 11:42:37.558325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:12.199 [2024-11-04 11:42:37.558337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.199 [2024-11-04 11:42:37.560803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.199 [2024-11-04 11:42:37.560854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:12.199 BaseBdev2 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 BaseBdev3_malloc 00:10:12.199 11:42:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 true 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 [2024-11-04 11:42:37.637653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:12.199 [2024-11-04 11:42:37.637713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.199 [2024-11-04 11:42:37.637734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:12.199 [2024-11-04 11:42:37.637745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.199 [2024-11-04 11:42:37.640052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.199 [2024-11-04 11:42:37.640107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:12.199 BaseBdev3 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 [2024-11-04 11:42:37.649700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.199 [2024-11-04 11:42:37.651534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.199 [2024-11-04 11:42:37.651626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.199 [2024-11-04 11:42:37.651891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:12.199 [2024-11-04 11:42:37.651911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:12.199 [2024-11-04 11:42:37.652214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:12.199 [2024-11-04 11:42:37.652409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:12.199 [2024-11-04 11:42:37.652425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:12.199 [2024-11-04 11:42:37.652617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.199 11:42:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.199 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.200 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.200 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.200 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.200 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.200 "name": "raid_bdev1", 00:10:12.200 "uuid": "5b6ea7a9-5866-479d-8474-d4fa0d3f0b89", 00:10:12.200 "strip_size_kb": 64, 00:10:12.200 "state": "online", 00:10:12.200 "raid_level": "concat", 00:10:12.200 "superblock": true, 00:10:12.200 "num_base_bdevs": 3, 00:10:12.200 "num_base_bdevs_discovered": 3, 00:10:12.200 "num_base_bdevs_operational": 3, 00:10:12.200 "base_bdevs_list": [ 00:10:12.200 { 00:10:12.200 "name": "BaseBdev1", 00:10:12.200 "uuid": "1f8230ab-9f12-5038-bde7-612369c12137", 00:10:12.200 "is_configured": true, 00:10:12.200 "data_offset": 2048, 00:10:12.200 "data_size": 63488 00:10:12.200 }, 00:10:12.200 { 00:10:12.200 "name": "BaseBdev2", 00:10:12.200 "uuid": "f5074160-b2af-5a37-b068-f2991b79a672", 00:10:12.200 "is_configured": true, 00:10:12.200 "data_offset": 2048, 00:10:12.200 "data_size": 63488 
00:10:12.200 }, 00:10:12.200 { 00:10:12.200 "name": "BaseBdev3", 00:10:12.200 "uuid": "7fc9d57a-7f4c-5cab-97fe-1e486fe09648", 00:10:12.200 "is_configured": true, 00:10:12.200 "data_offset": 2048, 00:10:12.200 "data_size": 63488 00:10:12.200 } 00:10:12.200 ] 00:10:12.200 }' 00:10:12.200 11:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.200 11:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.766 11:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:12.766 11:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:12.766 [2024-11-04 11:42:38.201970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.703 "name": "raid_bdev1", 00:10:13.703 "uuid": "5b6ea7a9-5866-479d-8474-d4fa0d3f0b89", 00:10:13.703 "strip_size_kb": 64, 00:10:13.703 "state": "online", 00:10:13.703 "raid_level": "concat", 00:10:13.703 "superblock": true, 00:10:13.703 "num_base_bdevs": 3, 00:10:13.703 "num_base_bdevs_discovered": 3, 00:10:13.703 "num_base_bdevs_operational": 3, 00:10:13.703 "base_bdevs_list": [ 00:10:13.703 { 00:10:13.703 "name": "BaseBdev1", 00:10:13.703 "uuid": "1f8230ab-9f12-5038-bde7-612369c12137", 00:10:13.703 "is_configured": true, 00:10:13.703 "data_offset": 2048, 00:10:13.703 "data_size": 63488 
00:10:13.703 }, 00:10:13.703 { 00:10:13.703 "name": "BaseBdev2", 00:10:13.703 "uuid": "f5074160-b2af-5a37-b068-f2991b79a672", 00:10:13.703 "is_configured": true, 00:10:13.703 "data_offset": 2048, 00:10:13.703 "data_size": 63488 00:10:13.703 }, 00:10:13.703 { 00:10:13.703 "name": "BaseBdev3", 00:10:13.703 "uuid": "7fc9d57a-7f4c-5cab-97fe-1e486fe09648", 00:10:13.703 "is_configured": true, 00:10:13.703 "data_offset": 2048, 00:10:13.703 "data_size": 63488 00:10:13.703 } 00:10:13.703 ] 00:10:13.703 }' 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.703 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.304 [2024-11-04 11:42:39.578697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.304 [2024-11-04 11:42:39.578802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.304 [2024-11-04 11:42:39.582083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.304 [2024-11-04 11:42:39.582198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.304 [2024-11-04 11:42:39.582267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.304 [2024-11-04 11:42:39.582349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:14.304 { 00:10:14.304 "results": [ 00:10:14.304 { 00:10:14.304 "job": "raid_bdev1", 00:10:14.304 "core_mask": "0x1", 00:10:14.304 "workload": "randrw", 00:10:14.304 "percentage": 50, 
00:10:14.304 "status": "finished", 00:10:14.304 "queue_depth": 1, 00:10:14.304 "io_size": 131072, 00:10:14.304 "runtime": 1.377749, 00:10:14.304 "iops": 14706.597500705862, 00:10:14.304 "mibps": 1838.3246875882328, 00:10:14.304 "io_failed": 1, 00:10:14.304 "io_timeout": 0, 00:10:14.304 "avg_latency_us": 94.4925496101807, 00:10:14.304 "min_latency_us": 26.494323144104804, 00:10:14.304 "max_latency_us": 1624.0908296943232 00:10:14.304 } 00:10:14.304 ], 00:10:14.304 "core_count": 1 00:10:14.304 } 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67342 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67342 ']' 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67342 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67342 00:10:14.304 killing process with pid 67342 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67342' 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67342 00:10:14.304 11:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67342 00:10:14.304 [2024-11-04 11:42:39.625994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.561 [2024-11-04 
11:42:39.868437] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Vy2nVDeiW3 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:15.942 ************************************ 00:10:15.942 END TEST raid_read_error_test 00:10:15.942 ************************************ 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:15.942 00:10:15.942 real 0m4.659s 00:10:15.942 user 0m5.565s 00:10:15.942 sys 0m0.557s 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.942 11:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.942 11:42:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:15.942 11:42:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:15.942 11:42:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.942 11:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.942 ************************************ 00:10:15.942 START TEST raid_write_error_test 00:10:15.942 ************************************ 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:10:15.942 11:42:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.942 11:42:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BaJoz1xJtq 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67482 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67482 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67482 ']' 00:10:15.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.942 11:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.942 [2024-11-04 11:42:41.280031] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:10:15.942 [2024-11-04 11:42:41.280186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67482 ] 00:10:15.942 [2024-11-04 11:42:41.453752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.201 [2024-11-04 11:42:41.573043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.460 [2024-11-04 11:42:41.779202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.460 [2024-11-04 11:42:41.779234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.720 BaseBdev1_malloc 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.720 true 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.720 [2024-11-04 11:42:42.212883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.720 [2024-11-04 11:42:42.213022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.720 [2024-11-04 11:42:42.213060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.720 [2024-11-04 11:42:42.213077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.720 [2024-11-04 11:42:42.215521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.720 [2024-11-04 11:42:42.215564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.720 BaseBdev1 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.720 11:42:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.980 BaseBdev2_malloc 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.980 true 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.980 [2024-11-04 11:42:42.281553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.980 [2024-11-04 11:42:42.281614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.980 [2024-11-04 11:42:42.281637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.980 [2024-11-04 11:42:42.281650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.980 [2024-11-04 11:42:42.283957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.980 [2024-11-04 11:42:42.284002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.980 BaseBdev2 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.980 11:42:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.980 BaseBdev3_malloc 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.980 true 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.980 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.980 [2024-11-04 11:42:42.358194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:16.980 [2024-11-04 11:42:42.358298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.980 [2024-11-04 11:42:42.358327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:16.980 [2024-11-04 11:42:42.358342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.981 [2024-11-04 11:42:42.360587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.981 [2024-11-04 11:42:42.360629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:16.981 BaseBdev3 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.981 [2024-11-04 11:42:42.370243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.981 [2024-11-04 11:42:42.372155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.981 [2024-11-04 11:42:42.372247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.981 [2024-11-04 11:42:42.372475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.981 [2024-11-04 11:42:42.372487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.981 [2024-11-04 11:42:42.372743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:16.981 [2024-11-04 11:42:42.372943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.981 [2024-11-04 11:42:42.372960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:16.981 [2024-11-04 11:42:42.373118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.981 "name": "raid_bdev1", 00:10:16.981 "uuid": "2ec57ca1-e46c-49f4-9e5b-ec3de99541ca", 00:10:16.981 "strip_size_kb": 64, 00:10:16.981 "state": "online", 00:10:16.981 "raid_level": "concat", 00:10:16.981 "superblock": true, 00:10:16.981 "num_base_bdevs": 3, 00:10:16.981 "num_base_bdevs_discovered": 3, 00:10:16.981 "num_base_bdevs_operational": 3, 00:10:16.981 "base_bdevs_list": [ 00:10:16.981 { 00:10:16.981 
"name": "BaseBdev1", 00:10:16.981 "uuid": "b2f569e4-a1e9-54ca-bf63-ec29191e9397", 00:10:16.981 "is_configured": true, 00:10:16.981 "data_offset": 2048, 00:10:16.981 "data_size": 63488 00:10:16.981 }, 00:10:16.981 { 00:10:16.981 "name": "BaseBdev2", 00:10:16.981 "uuid": "527b001f-fd52-5d83-9761-b3ea80a41b42", 00:10:16.981 "is_configured": true, 00:10:16.981 "data_offset": 2048, 00:10:16.981 "data_size": 63488 00:10:16.981 }, 00:10:16.981 { 00:10:16.981 "name": "BaseBdev3", 00:10:16.981 "uuid": "518bef66-8ef0-5442-b17b-ff03e9e059c2", 00:10:16.981 "is_configured": true, 00:10:16.981 "data_offset": 2048, 00:10:16.981 "data_size": 63488 00:10:16.981 } 00:10:16.981 ] 00:10:16.981 }' 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.981 11:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.550 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.550 11:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.550 [2024-11-04 11:42:42.910608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:18.486 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:18.486 11:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.487 "name": "raid_bdev1", 00:10:18.487 "uuid": "2ec57ca1-e46c-49f4-9e5b-ec3de99541ca", 00:10:18.487 "strip_size_kb": 64, 00:10:18.487 "state": "online", 
00:10:18.487 "raid_level": "concat", 00:10:18.487 "superblock": true, 00:10:18.487 "num_base_bdevs": 3, 00:10:18.487 "num_base_bdevs_discovered": 3, 00:10:18.487 "num_base_bdevs_operational": 3, 00:10:18.487 "base_bdevs_list": [ 00:10:18.487 { 00:10:18.487 "name": "BaseBdev1", 00:10:18.487 "uuid": "b2f569e4-a1e9-54ca-bf63-ec29191e9397", 00:10:18.487 "is_configured": true, 00:10:18.487 "data_offset": 2048, 00:10:18.487 "data_size": 63488 00:10:18.487 }, 00:10:18.487 { 00:10:18.487 "name": "BaseBdev2", 00:10:18.487 "uuid": "527b001f-fd52-5d83-9761-b3ea80a41b42", 00:10:18.487 "is_configured": true, 00:10:18.487 "data_offset": 2048, 00:10:18.487 "data_size": 63488 00:10:18.487 }, 00:10:18.487 { 00:10:18.487 "name": "BaseBdev3", 00:10:18.487 "uuid": "518bef66-8ef0-5442-b17b-ff03e9e059c2", 00:10:18.487 "is_configured": true, 00:10:18.487 "data_offset": 2048, 00:10:18.487 "data_size": 63488 00:10:18.487 } 00:10:18.487 ] 00:10:18.487 }' 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.487 11:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.056 [2024-11-04 11:42:44.286974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.056 [2024-11-04 11:42:44.287006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.056 [2024-11-04 11:42:44.290008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.056 [2024-11-04 11:42:44.290059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.056 [2024-11-04 11:42:44.290099] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.056 [2024-11-04 11:42:44.290112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:19.056 { 00:10:19.056 "results": [ 00:10:19.056 { 00:10:19.056 "job": "raid_bdev1", 00:10:19.056 "core_mask": "0x1", 00:10:19.056 "workload": "randrw", 00:10:19.056 "percentage": 50, 00:10:19.056 "status": "finished", 00:10:19.056 "queue_depth": 1, 00:10:19.056 "io_size": 131072, 00:10:19.056 "runtime": 1.377157, 00:10:19.056 "iops": 14875.573373261, 00:10:19.056 "mibps": 1859.446671657625, 00:10:19.056 "io_failed": 1, 00:10:19.056 "io_timeout": 0, 00:10:19.056 "avg_latency_us": 93.38275455539704, 00:10:19.056 "min_latency_us": 26.941484716157206, 00:10:19.056 "max_latency_us": 1452.380786026201 00:10:19.056 } 00:10:19.056 ], 00:10:19.056 "core_count": 1 00:10:19.056 } 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67482 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67482 ']' 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67482 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67482 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:19.056 killing process with pid 67482 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:19.056 11:42:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67482' 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67482 00:10:19.056 [2024-11-04 11:42:44.336698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.056 11:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67482 00:10:19.056 [2024-11-04 11:42:44.570875] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BaJoz1xJtq 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.435 ************************************ 00:10:20.435 END TEST raid_write_error_test 00:10:20.435 ************************************ 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:20.435 00:10:20.435 real 0m4.627s 00:10:20.435 user 0m5.523s 00:10:20.435 sys 0m0.559s 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.435 11:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 11:42:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:20.435 11:42:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:20.435 11:42:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:20.435 11:42:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.435 11:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.435 ************************************ 00:10:20.435 START TEST raid_state_function_test 00:10:20.435 ************************************ 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67626 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67626' 00:10:20.436 Process raid pid: 67626 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67626 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67626 ']' 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.436 11:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.695 [2024-11-04 11:42:45.970975] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:10:20.695 [2024-11-04 11:42:45.971177] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.695 [2024-11-04 11:42:46.145058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.955 [2024-11-04 11:42:46.271263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.214 [2024-11-04 11:42:46.484085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.214 [2024-11-04 11:42:46.484213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.474 [2024-11-04 11:42:46.811113] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.474 [2024-11-04 11:42:46.811265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.474 [2024-11-04 11:42:46.811286] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.474 [2024-11-04 11:42:46.811301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.474 [2024-11-04 11:42:46.811309] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.474 [2024-11-04 11:42:46.811321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.474 
11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.474 "name": "Existed_Raid", 00:10:21.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.474 "strip_size_kb": 0, 00:10:21.474 "state": "configuring", 00:10:21.474 "raid_level": "raid1", 00:10:21.474 "superblock": false, 00:10:21.474 "num_base_bdevs": 3, 00:10:21.474 "num_base_bdevs_discovered": 0, 00:10:21.474 "num_base_bdevs_operational": 3, 00:10:21.474 "base_bdevs_list": [ 00:10:21.474 { 00:10:21.474 "name": "BaseBdev1", 00:10:21.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.474 "is_configured": false, 00:10:21.474 "data_offset": 0, 00:10:21.474 "data_size": 0 00:10:21.474 }, 00:10:21.474 { 00:10:21.474 "name": "BaseBdev2", 00:10:21.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.474 "is_configured": false, 00:10:21.474 "data_offset": 0, 00:10:21.474 "data_size": 0 00:10:21.474 }, 00:10:21.474 { 00:10:21.474 "name": "BaseBdev3", 00:10:21.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.474 "is_configured": false, 00:10:21.474 "data_offset": 0, 00:10:21.474 "data_size": 0 00:10:21.474 } 00:10:21.474 ] 00:10:21.474 }' 00:10:21.474 11:42:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.474 11:42:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.734 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.734 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.734 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.734 [2024-11-04 11:42:47.246341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.734 [2024-11-04 11:42:47.246477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:21.734 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.734 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:21.734 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.734 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.994 [2024-11-04 11:42:47.258314] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.994 [2024-11-04 11:42:47.258423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.994 [2024-11-04 11:42:47.258472] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.994 [2024-11-04 11:42:47.258510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.994 [2024-11-04 11:42:47.258547] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.994 [2024-11-04 11:42:47.258597] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.994 [2024-11-04 11:42:47.313361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.994 BaseBdev1 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.994 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.994 [ 00:10:21.994 { 00:10:21.994 "name": "BaseBdev1", 00:10:21.994 "aliases": [ 00:10:21.994 "c962d663-3b00-4e43-9e2e-f6203250bd38" 00:10:21.994 ], 00:10:21.994 "product_name": "Malloc disk", 00:10:21.994 "block_size": 512, 00:10:21.994 "num_blocks": 65536, 00:10:21.994 "uuid": "c962d663-3b00-4e43-9e2e-f6203250bd38", 00:10:21.994 "assigned_rate_limits": { 00:10:21.994 "rw_ios_per_sec": 0, 00:10:21.994 "rw_mbytes_per_sec": 0, 00:10:21.994 "r_mbytes_per_sec": 0, 00:10:21.994 "w_mbytes_per_sec": 0 00:10:21.994 }, 00:10:21.994 "claimed": true, 00:10:21.994 "claim_type": "exclusive_write", 00:10:21.994 "zoned": false, 00:10:21.994 "supported_io_types": { 00:10:21.994 "read": true, 00:10:21.994 "write": true, 00:10:21.994 "unmap": true, 00:10:21.994 "flush": true, 00:10:21.994 "reset": true, 00:10:21.994 "nvme_admin": false, 00:10:21.994 "nvme_io": false, 00:10:21.994 "nvme_io_md": false, 00:10:21.994 "write_zeroes": true, 00:10:21.994 "zcopy": true, 00:10:21.994 "get_zone_info": false, 00:10:21.994 "zone_management": false, 00:10:21.994 "zone_append": false, 00:10:21.994 "compare": false, 00:10:21.994 "compare_and_write": false, 00:10:21.994 "abort": true, 00:10:21.994 "seek_hole": false, 00:10:21.994 "seek_data": false, 00:10:21.994 "copy": true, 00:10:21.994 "nvme_iov_md": false 00:10:21.994 }, 00:10:21.994 "memory_domains": [ 00:10:21.994 { 00:10:21.994 "dma_device_id": "system", 00:10:21.994 "dma_device_type": 1 00:10:21.994 }, 00:10:21.994 { 00:10:21.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.994 "dma_device_type": 2 00:10:21.994 } 00:10:21.994 ], 00:10:21.994 "driver_specific": {} 00:10:21.994 } 00:10:21.994 ] 00:10:21.995 11:42:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:21.995 "name": "Existed_Raid", 00:10:21.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.995 "strip_size_kb": 0, 00:10:21.995 "state": "configuring", 00:10:21.995 "raid_level": "raid1", 00:10:21.995 "superblock": false, 00:10:21.995 "num_base_bdevs": 3, 00:10:21.995 "num_base_bdevs_discovered": 1, 00:10:21.995 "num_base_bdevs_operational": 3, 00:10:21.995 "base_bdevs_list": [ 00:10:21.995 { 00:10:21.995 "name": "BaseBdev1", 00:10:21.995 "uuid": "c962d663-3b00-4e43-9e2e-f6203250bd38", 00:10:21.995 "is_configured": true, 00:10:21.995 "data_offset": 0, 00:10:21.995 "data_size": 65536 00:10:21.995 }, 00:10:21.995 { 00:10:21.995 "name": "BaseBdev2", 00:10:21.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.995 "is_configured": false, 00:10:21.995 "data_offset": 0, 00:10:21.995 "data_size": 0 00:10:21.995 }, 00:10:21.995 { 00:10:21.995 "name": "BaseBdev3", 00:10:21.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.995 "is_configured": false, 00:10:21.995 "data_offset": 0, 00:10:21.995 "data_size": 0 00:10:21.995 } 00:10:21.995 ] 00:10:21.995 }' 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.995 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.263 [2024-11-04 11:42:47.764686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.263 [2024-11-04 11:42:47.764836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.263 [2024-11-04 11:42:47.776704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.263 [2024-11-04 11:42:47.778713] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.263 [2024-11-04 11:42:47.778814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.263 [2024-11-04 11:42:47.778857] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.263 [2024-11-04 11:42:47.778921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.263 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.523 "name": "Existed_Raid", 00:10:22.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.523 "strip_size_kb": 0, 00:10:22.523 "state": "configuring", 00:10:22.523 "raid_level": "raid1", 00:10:22.523 "superblock": false, 00:10:22.523 "num_base_bdevs": 3, 00:10:22.523 "num_base_bdevs_discovered": 1, 00:10:22.523 "num_base_bdevs_operational": 3, 00:10:22.523 "base_bdevs_list": [ 00:10:22.523 { 00:10:22.523 "name": "BaseBdev1", 00:10:22.523 "uuid": "c962d663-3b00-4e43-9e2e-f6203250bd38", 00:10:22.523 "is_configured": true, 00:10:22.523 "data_offset": 0, 00:10:22.523 "data_size": 65536 00:10:22.523 }, 00:10:22.523 { 00:10:22.523 "name": "BaseBdev2", 00:10:22.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.523 
"is_configured": false, 00:10:22.523 "data_offset": 0, 00:10:22.523 "data_size": 0 00:10:22.523 }, 00:10:22.523 { 00:10:22.523 "name": "BaseBdev3", 00:10:22.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.523 "is_configured": false, 00:10:22.523 "data_offset": 0, 00:10:22.523 "data_size": 0 00:10:22.523 } 00:10:22.523 ] 00:10:22.523 }' 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.523 11:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.781 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.781 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.781 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.040 [2024-11-04 11:42:48.327479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.040 BaseBdev2 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:23.040 11:42:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.040 [ 00:10:23.040 { 00:10:23.040 "name": "BaseBdev2", 00:10:23.040 "aliases": [ 00:10:23.040 "2c72244c-9d2a-435c-a29b-b536191dcb33" 00:10:23.040 ], 00:10:23.040 "product_name": "Malloc disk", 00:10:23.040 "block_size": 512, 00:10:23.040 "num_blocks": 65536, 00:10:23.040 "uuid": "2c72244c-9d2a-435c-a29b-b536191dcb33", 00:10:23.040 "assigned_rate_limits": { 00:10:23.040 "rw_ios_per_sec": 0, 00:10:23.040 "rw_mbytes_per_sec": 0, 00:10:23.040 "r_mbytes_per_sec": 0, 00:10:23.040 "w_mbytes_per_sec": 0 00:10:23.040 }, 00:10:23.040 "claimed": true, 00:10:23.040 "claim_type": "exclusive_write", 00:10:23.040 "zoned": false, 00:10:23.040 "supported_io_types": { 00:10:23.040 "read": true, 00:10:23.040 "write": true, 00:10:23.040 "unmap": true, 00:10:23.040 "flush": true, 00:10:23.040 "reset": true, 00:10:23.040 "nvme_admin": false, 00:10:23.040 "nvme_io": false, 00:10:23.040 "nvme_io_md": false, 00:10:23.040 "write_zeroes": true, 00:10:23.040 "zcopy": true, 00:10:23.040 "get_zone_info": false, 00:10:23.040 "zone_management": false, 00:10:23.040 "zone_append": false, 00:10:23.040 "compare": false, 00:10:23.040 "compare_and_write": false, 00:10:23.040 "abort": true, 00:10:23.040 "seek_hole": false, 00:10:23.040 "seek_data": false, 00:10:23.040 "copy": true, 00:10:23.040 "nvme_iov_md": false 00:10:23.040 }, 00:10:23.040 
"memory_domains": [ 00:10:23.040 { 00:10:23.040 "dma_device_id": "system", 00:10:23.040 "dma_device_type": 1 00:10:23.040 }, 00:10:23.040 { 00:10:23.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.040 "dma_device_type": 2 00:10:23.040 } 00:10:23.040 ], 00:10:23.040 "driver_specific": {} 00:10:23.040 } 00:10:23.040 ] 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.040 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.040 "name": "Existed_Raid", 00:10:23.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.041 "strip_size_kb": 0, 00:10:23.041 "state": "configuring", 00:10:23.041 "raid_level": "raid1", 00:10:23.041 "superblock": false, 00:10:23.041 "num_base_bdevs": 3, 00:10:23.041 "num_base_bdevs_discovered": 2, 00:10:23.041 "num_base_bdevs_operational": 3, 00:10:23.041 "base_bdevs_list": [ 00:10:23.041 { 00:10:23.041 "name": "BaseBdev1", 00:10:23.041 "uuid": "c962d663-3b00-4e43-9e2e-f6203250bd38", 00:10:23.041 "is_configured": true, 00:10:23.041 "data_offset": 0, 00:10:23.041 "data_size": 65536 00:10:23.041 }, 00:10:23.041 { 00:10:23.041 "name": "BaseBdev2", 00:10:23.041 "uuid": "2c72244c-9d2a-435c-a29b-b536191dcb33", 00:10:23.041 "is_configured": true, 00:10:23.041 "data_offset": 0, 00:10:23.041 "data_size": 65536 00:10:23.041 }, 00:10:23.041 { 00:10:23.041 "name": "BaseBdev3", 00:10:23.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.041 "is_configured": false, 00:10:23.041 "data_offset": 0, 00:10:23.041 "data_size": 0 00:10:23.041 } 00:10:23.041 ] 00:10:23.041 }' 00:10:23.041 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.041 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.300 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:23.300 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.300 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.560 [2024-11-04 11:42:48.872786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.560 [2024-11-04 11:42:48.872929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:23.560 [2024-11-04 11:42:48.872963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:23.560 [2024-11-04 11:42:48.873321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:23.560 [2024-11-04 11:42:48.873575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:23.560 [2024-11-04 11:42:48.873622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:23.560 [2024-11-04 11:42:48.873987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.560 BaseBdev3 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.560 [ 00:10:23.560 { 00:10:23.560 "name": "BaseBdev3", 00:10:23.560 "aliases": [ 00:10:23.560 "f9763810-ceed-422e-a651-ff274349038c" 00:10:23.560 ], 00:10:23.560 "product_name": "Malloc disk", 00:10:23.560 "block_size": 512, 00:10:23.560 "num_blocks": 65536, 00:10:23.560 "uuid": "f9763810-ceed-422e-a651-ff274349038c", 00:10:23.560 "assigned_rate_limits": { 00:10:23.560 "rw_ios_per_sec": 0, 00:10:23.560 "rw_mbytes_per_sec": 0, 00:10:23.560 "r_mbytes_per_sec": 0, 00:10:23.560 "w_mbytes_per_sec": 0 00:10:23.560 }, 00:10:23.560 "claimed": true, 00:10:23.560 "claim_type": "exclusive_write", 00:10:23.560 "zoned": false, 00:10:23.560 "supported_io_types": { 00:10:23.560 "read": true, 00:10:23.560 "write": true, 00:10:23.560 "unmap": true, 00:10:23.560 "flush": true, 00:10:23.560 "reset": true, 00:10:23.560 "nvme_admin": false, 00:10:23.560 "nvme_io": false, 00:10:23.560 "nvme_io_md": false, 00:10:23.560 "write_zeroes": true, 00:10:23.560 "zcopy": true, 00:10:23.560 "get_zone_info": false, 00:10:23.560 "zone_management": false, 00:10:23.560 "zone_append": false, 00:10:23.560 "compare": false, 00:10:23.560 "compare_and_write": false, 00:10:23.560 "abort": true, 00:10:23.560 "seek_hole": false, 00:10:23.560 "seek_data": false, 00:10:23.560 
"copy": true, 00:10:23.560 "nvme_iov_md": false 00:10:23.560 }, 00:10:23.560 "memory_domains": [ 00:10:23.560 { 00:10:23.560 "dma_device_id": "system", 00:10:23.560 "dma_device_type": 1 00:10:23.560 }, 00:10:23.560 { 00:10:23.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.560 "dma_device_type": 2 00:10:23.560 } 00:10:23.560 ], 00:10:23.560 "driver_specific": {} 00:10:23.560 } 00:10:23.560 ] 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.560 11:42:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.560 "name": "Existed_Raid", 00:10:23.560 "uuid": "f83bb068-1c9c-4e0a-a430-c9b7a986796f", 00:10:23.560 "strip_size_kb": 0, 00:10:23.560 "state": "online", 00:10:23.560 "raid_level": "raid1", 00:10:23.560 "superblock": false, 00:10:23.560 "num_base_bdevs": 3, 00:10:23.560 "num_base_bdevs_discovered": 3, 00:10:23.560 "num_base_bdevs_operational": 3, 00:10:23.560 "base_bdevs_list": [ 00:10:23.560 { 00:10:23.560 "name": "BaseBdev1", 00:10:23.560 "uuid": "c962d663-3b00-4e43-9e2e-f6203250bd38", 00:10:23.560 "is_configured": true, 00:10:23.560 "data_offset": 0, 00:10:23.560 "data_size": 65536 00:10:23.560 }, 00:10:23.560 { 00:10:23.560 "name": "BaseBdev2", 00:10:23.560 "uuid": "2c72244c-9d2a-435c-a29b-b536191dcb33", 00:10:23.560 "is_configured": true, 00:10:23.560 "data_offset": 0, 00:10:23.560 "data_size": 65536 00:10:23.560 }, 00:10:23.560 { 00:10:23.560 "name": "BaseBdev3", 00:10:23.560 "uuid": "f9763810-ceed-422e-a651-ff274349038c", 00:10:23.560 "is_configured": true, 00:10:23.560 "data_offset": 0, 00:10:23.560 "data_size": 65536 00:10:23.560 } 00:10:23.560 ] 00:10:23.560 }' 00:10:23.560 11:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.561 11:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.820 11:42:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.820 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.820 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.820 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.820 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.821 [2024-11-04 11:42:49.288605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.821 "name": "Existed_Raid", 00:10:23.821 "aliases": [ 00:10:23.821 "f83bb068-1c9c-4e0a-a430-c9b7a986796f" 00:10:23.821 ], 00:10:23.821 "product_name": "Raid Volume", 00:10:23.821 "block_size": 512, 00:10:23.821 "num_blocks": 65536, 00:10:23.821 "uuid": "f83bb068-1c9c-4e0a-a430-c9b7a986796f", 00:10:23.821 "assigned_rate_limits": { 00:10:23.821 "rw_ios_per_sec": 0, 00:10:23.821 "rw_mbytes_per_sec": 0, 00:10:23.821 "r_mbytes_per_sec": 0, 00:10:23.821 "w_mbytes_per_sec": 0 00:10:23.821 }, 00:10:23.821 "claimed": false, 00:10:23.821 "zoned": false, 
00:10:23.821 "supported_io_types": { 00:10:23.821 "read": true, 00:10:23.821 "write": true, 00:10:23.821 "unmap": false, 00:10:23.821 "flush": false, 00:10:23.821 "reset": true, 00:10:23.821 "nvme_admin": false, 00:10:23.821 "nvme_io": false, 00:10:23.821 "nvme_io_md": false, 00:10:23.821 "write_zeroes": true, 00:10:23.821 "zcopy": false, 00:10:23.821 "get_zone_info": false, 00:10:23.821 "zone_management": false, 00:10:23.821 "zone_append": false, 00:10:23.821 "compare": false, 00:10:23.821 "compare_and_write": false, 00:10:23.821 "abort": false, 00:10:23.821 "seek_hole": false, 00:10:23.821 "seek_data": false, 00:10:23.821 "copy": false, 00:10:23.821 "nvme_iov_md": false 00:10:23.821 }, 00:10:23.821 "memory_domains": [ 00:10:23.821 { 00:10:23.821 "dma_device_id": "system", 00:10:23.821 "dma_device_type": 1 00:10:23.821 }, 00:10:23.821 { 00:10:23.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.821 "dma_device_type": 2 00:10:23.821 }, 00:10:23.821 { 00:10:23.821 "dma_device_id": "system", 00:10:23.821 "dma_device_type": 1 00:10:23.821 }, 00:10:23.821 { 00:10:23.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.821 "dma_device_type": 2 00:10:23.821 }, 00:10:23.821 { 00:10:23.821 "dma_device_id": "system", 00:10:23.821 "dma_device_type": 1 00:10:23.821 }, 00:10:23.821 { 00:10:23.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.821 "dma_device_type": 2 00:10:23.821 } 00:10:23.821 ], 00:10:23.821 "driver_specific": { 00:10:23.821 "raid": { 00:10:23.821 "uuid": "f83bb068-1c9c-4e0a-a430-c9b7a986796f", 00:10:23.821 "strip_size_kb": 0, 00:10:23.821 "state": "online", 00:10:23.821 "raid_level": "raid1", 00:10:23.821 "superblock": false, 00:10:23.821 "num_base_bdevs": 3, 00:10:23.821 "num_base_bdevs_discovered": 3, 00:10:23.821 "num_base_bdevs_operational": 3, 00:10:23.821 "base_bdevs_list": [ 00:10:23.821 { 00:10:23.821 "name": "BaseBdev1", 00:10:23.821 "uuid": "c962d663-3b00-4e43-9e2e-f6203250bd38", 00:10:23.821 "is_configured": true, 00:10:23.821 
"data_offset": 0, 00:10:23.821 "data_size": 65536 00:10:23.821 }, 00:10:23.821 { 00:10:23.821 "name": "BaseBdev2", 00:10:23.821 "uuid": "2c72244c-9d2a-435c-a29b-b536191dcb33", 00:10:23.821 "is_configured": true, 00:10:23.821 "data_offset": 0, 00:10:23.821 "data_size": 65536 00:10:23.821 }, 00:10:23.821 { 00:10:23.821 "name": "BaseBdev3", 00:10:23.821 "uuid": "f9763810-ceed-422e-a651-ff274349038c", 00:10:23.821 "is_configured": true, 00:10:23.821 "data_offset": 0, 00:10:23.821 "data_size": 65536 00:10:23.821 } 00:10:23.821 ] 00:10:23.821 } 00:10:23.821 } 00:10:23.821 }' 00:10:23.821 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:24.081 BaseBdev2 00:10:24.081 BaseBdev3' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.081 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.082 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:24.082 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:24.082 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.082 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.082 [2024-11-04 11:42:49.540142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.353 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.353 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.354 "name": "Existed_Raid", 00:10:24.354 "uuid": "f83bb068-1c9c-4e0a-a430-c9b7a986796f", 00:10:24.354 "strip_size_kb": 0, 00:10:24.354 "state": "online", 00:10:24.354 "raid_level": "raid1", 00:10:24.354 "superblock": false, 00:10:24.354 "num_base_bdevs": 3, 00:10:24.354 "num_base_bdevs_discovered": 2, 00:10:24.354 "num_base_bdevs_operational": 2, 00:10:24.354 "base_bdevs_list": [ 00:10:24.354 { 00:10:24.354 "name": null, 00:10:24.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.354 "is_configured": false, 00:10:24.354 "data_offset": 0, 00:10:24.354 "data_size": 65536 00:10:24.354 }, 00:10:24.354 { 00:10:24.354 "name": "BaseBdev2", 00:10:24.354 "uuid": "2c72244c-9d2a-435c-a29b-b536191dcb33", 00:10:24.354 "is_configured": true, 00:10:24.354 "data_offset": 0, 00:10:24.354 "data_size": 65536 00:10:24.354 }, 00:10:24.354 { 00:10:24.354 "name": "BaseBdev3", 00:10:24.354 "uuid": "f9763810-ceed-422e-a651-ff274349038c", 00:10:24.354 "is_configured": true, 00:10:24.354 "data_offset": 0, 00:10:24.354 "data_size": 65536 00:10:24.354 } 00:10:24.354 ] 
00:10:24.354 }' 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.354 11:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.614 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.614 [2024-11-04 11:42:50.089887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.873 11:42:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.873 [2024-11-04 11:42:50.248633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.873 [2024-11-04 11:42:50.248742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.873 [2024-11-04 11:42:50.349627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.873 [2024-11-04 11:42:50.349691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.873 [2024-11-04 11:42:50.349706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.873 11:42:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:24.873 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.133 BaseBdev2 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:25.133 
11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.133 [ 00:10:25.133 { 00:10:25.133 "name": "BaseBdev2", 00:10:25.133 "aliases": [ 00:10:25.133 "de6b76f9-4c71-43d2-91ec-a08ea15f80c5" 00:10:25.133 ], 00:10:25.133 "product_name": "Malloc disk", 00:10:25.133 "block_size": 512, 00:10:25.133 "num_blocks": 65536, 00:10:25.133 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:25.133 "assigned_rate_limits": { 00:10:25.133 "rw_ios_per_sec": 0, 00:10:25.133 "rw_mbytes_per_sec": 0, 00:10:25.133 "r_mbytes_per_sec": 0, 00:10:25.133 "w_mbytes_per_sec": 0 00:10:25.133 }, 00:10:25.133 "claimed": false, 00:10:25.133 "zoned": false, 00:10:25.133 "supported_io_types": { 00:10:25.133 "read": true, 00:10:25.133 "write": true, 00:10:25.133 "unmap": true, 00:10:25.133 "flush": true, 00:10:25.133 "reset": true, 00:10:25.133 "nvme_admin": false, 00:10:25.133 "nvme_io": false, 00:10:25.133 "nvme_io_md": false, 00:10:25.133 "write_zeroes": true, 
00:10:25.133 "zcopy": true, 00:10:25.133 "get_zone_info": false, 00:10:25.133 "zone_management": false, 00:10:25.133 "zone_append": false, 00:10:25.133 "compare": false, 00:10:25.133 "compare_and_write": false, 00:10:25.133 "abort": true, 00:10:25.133 "seek_hole": false, 00:10:25.133 "seek_data": false, 00:10:25.133 "copy": true, 00:10:25.133 "nvme_iov_md": false 00:10:25.133 }, 00:10:25.133 "memory_domains": [ 00:10:25.133 { 00:10:25.133 "dma_device_id": "system", 00:10:25.133 "dma_device_type": 1 00:10:25.133 }, 00:10:25.133 { 00:10:25.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.133 "dma_device_type": 2 00:10:25.133 } 00:10:25.133 ], 00:10:25.133 "driver_specific": {} 00:10:25.133 } 00:10:25.133 ] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.133 BaseBdev3 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:25.133 11:42:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.133 [ 00:10:25.133 { 00:10:25.133 "name": "BaseBdev3", 00:10:25.133 "aliases": [ 00:10:25.133 "e07ca83e-6132-4e4d-920e-9e64e89f0746" 00:10:25.133 ], 00:10:25.133 "product_name": "Malloc disk", 00:10:25.133 "block_size": 512, 00:10:25.133 "num_blocks": 65536, 00:10:25.133 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:25.133 "assigned_rate_limits": { 00:10:25.133 "rw_ios_per_sec": 0, 00:10:25.133 "rw_mbytes_per_sec": 0, 00:10:25.133 "r_mbytes_per_sec": 0, 00:10:25.133 "w_mbytes_per_sec": 0 00:10:25.133 }, 00:10:25.133 "claimed": false, 00:10:25.133 "zoned": false, 00:10:25.133 "supported_io_types": { 00:10:25.133 "read": true, 00:10:25.133 "write": true, 00:10:25.133 "unmap": true, 00:10:25.133 "flush": true, 00:10:25.133 "reset": true, 00:10:25.133 "nvme_admin": false, 00:10:25.133 "nvme_io": false, 00:10:25.133 "nvme_io_md": false, 00:10:25.133 "write_zeroes": true, 
00:10:25.133 "zcopy": true, 00:10:25.133 "get_zone_info": false, 00:10:25.133 "zone_management": false, 00:10:25.133 "zone_append": false, 00:10:25.133 "compare": false, 00:10:25.133 "compare_and_write": false, 00:10:25.133 "abort": true, 00:10:25.133 "seek_hole": false, 00:10:25.133 "seek_data": false, 00:10:25.133 "copy": true, 00:10:25.133 "nvme_iov_md": false 00:10:25.133 }, 00:10:25.133 "memory_domains": [ 00:10:25.133 { 00:10:25.133 "dma_device_id": "system", 00:10:25.133 "dma_device_type": 1 00:10:25.133 }, 00:10:25.133 { 00:10:25.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.133 "dma_device_type": 2 00:10:25.133 } 00:10:25.133 ], 00:10:25.133 "driver_specific": {} 00:10:25.133 } 00:10:25.133 ] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.133 [2024-11-04 11:42:50.546068] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.133 [2024-11-04 11:42:50.546124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.133 [2024-11-04 11:42:50.546158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.133 [2024-11-04 11:42:50.548512] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.133 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:25.134 "name": "Existed_Raid", 00:10:25.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.134 "strip_size_kb": 0, 00:10:25.134 "state": "configuring", 00:10:25.134 "raid_level": "raid1", 00:10:25.134 "superblock": false, 00:10:25.134 "num_base_bdevs": 3, 00:10:25.134 "num_base_bdevs_discovered": 2, 00:10:25.134 "num_base_bdevs_operational": 3, 00:10:25.134 "base_bdevs_list": [ 00:10:25.134 { 00:10:25.134 "name": "BaseBdev1", 00:10:25.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.134 "is_configured": false, 00:10:25.134 "data_offset": 0, 00:10:25.134 "data_size": 0 00:10:25.134 }, 00:10:25.134 { 00:10:25.134 "name": "BaseBdev2", 00:10:25.134 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:25.134 "is_configured": true, 00:10:25.134 "data_offset": 0, 00:10:25.134 "data_size": 65536 00:10:25.134 }, 00:10:25.134 { 00:10:25.134 "name": "BaseBdev3", 00:10:25.134 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:25.134 "is_configured": true, 00:10:25.134 "data_offset": 0, 00:10:25.134 "data_size": 65536 00:10:25.134 } 00:10:25.134 ] 00:10:25.134 }' 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.134 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.702 11:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:25.702 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.702 11:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.702 [2024-11-04 11:42:51.005334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.702 "name": "Existed_Raid", 00:10:25.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.702 "strip_size_kb": 0, 00:10:25.702 "state": "configuring", 00:10:25.702 "raid_level": "raid1", 00:10:25.702 "superblock": false, 00:10:25.702 "num_base_bdevs": 3, 
00:10:25.702 "num_base_bdevs_discovered": 1, 00:10:25.702 "num_base_bdevs_operational": 3, 00:10:25.702 "base_bdevs_list": [ 00:10:25.702 { 00:10:25.702 "name": "BaseBdev1", 00:10:25.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.702 "is_configured": false, 00:10:25.702 "data_offset": 0, 00:10:25.702 "data_size": 0 00:10:25.702 }, 00:10:25.702 { 00:10:25.702 "name": null, 00:10:25.702 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:25.702 "is_configured": false, 00:10:25.702 "data_offset": 0, 00:10:25.702 "data_size": 65536 00:10:25.702 }, 00:10:25.702 { 00:10:25.702 "name": "BaseBdev3", 00:10:25.702 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:25.702 "is_configured": true, 00:10:25.702 "data_offset": 0, 00:10:25.702 "data_size": 65536 00:10:25.702 } 00:10:25.702 ] 00:10:25.702 }' 00:10:25.702 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.703 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.967 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.967 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:25.967 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.967 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.967 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.236 11:42:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.236 [2024-11-04 11:42:51.536866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.236 BaseBdev1 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.236 [ 00:10:26.236 { 00:10:26.236 "name": "BaseBdev1", 00:10:26.236 "aliases": [ 00:10:26.236 "e9286de7-a793-48e9-9bd2-8f151c4e0b3f" 00:10:26.236 ], 00:10:26.236 "product_name": "Malloc disk", 
00:10:26.236 "block_size": 512, 00:10:26.236 "num_blocks": 65536, 00:10:26.236 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:26.236 "assigned_rate_limits": { 00:10:26.236 "rw_ios_per_sec": 0, 00:10:26.236 "rw_mbytes_per_sec": 0, 00:10:26.236 "r_mbytes_per_sec": 0, 00:10:26.236 "w_mbytes_per_sec": 0 00:10:26.236 }, 00:10:26.236 "claimed": true, 00:10:26.236 "claim_type": "exclusive_write", 00:10:26.236 "zoned": false, 00:10:26.236 "supported_io_types": { 00:10:26.236 "read": true, 00:10:26.236 "write": true, 00:10:26.236 "unmap": true, 00:10:26.236 "flush": true, 00:10:26.236 "reset": true, 00:10:26.236 "nvme_admin": false, 00:10:26.236 "nvme_io": false, 00:10:26.236 "nvme_io_md": false, 00:10:26.236 "write_zeroes": true, 00:10:26.236 "zcopy": true, 00:10:26.236 "get_zone_info": false, 00:10:26.236 "zone_management": false, 00:10:26.236 "zone_append": false, 00:10:26.236 "compare": false, 00:10:26.236 "compare_and_write": false, 00:10:26.236 "abort": true, 00:10:26.236 "seek_hole": false, 00:10:26.236 "seek_data": false, 00:10:26.236 "copy": true, 00:10:26.236 "nvme_iov_md": false 00:10:26.236 }, 00:10:26.236 "memory_domains": [ 00:10:26.236 { 00:10:26.236 "dma_device_id": "system", 00:10:26.236 "dma_device_type": 1 00:10:26.236 }, 00:10:26.236 { 00:10:26.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.236 "dma_device_type": 2 00:10:26.236 } 00:10:26.236 ], 00:10:26.236 "driver_specific": {} 00:10:26.236 } 00:10:26.236 ] 00:10:26.236 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.237 "name": "Existed_Raid", 00:10:26.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.237 "strip_size_kb": 0, 00:10:26.237 "state": "configuring", 00:10:26.237 "raid_level": "raid1", 00:10:26.237 "superblock": false, 00:10:26.237 "num_base_bdevs": 3, 00:10:26.237 "num_base_bdevs_discovered": 2, 00:10:26.237 "num_base_bdevs_operational": 3, 00:10:26.237 "base_bdevs_list": [ 00:10:26.237 { 00:10:26.237 "name": "BaseBdev1", 00:10:26.237 "uuid": 
"e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:26.237 "is_configured": true, 00:10:26.237 "data_offset": 0, 00:10:26.237 "data_size": 65536 00:10:26.237 }, 00:10:26.237 { 00:10:26.237 "name": null, 00:10:26.237 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:26.237 "is_configured": false, 00:10:26.237 "data_offset": 0, 00:10:26.237 "data_size": 65536 00:10:26.237 }, 00:10:26.237 { 00:10:26.237 "name": "BaseBdev3", 00:10:26.237 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:26.237 "is_configured": true, 00:10:26.237 "data_offset": 0, 00:10:26.237 "data_size": 65536 00:10:26.237 } 00:10:26.237 ] 00:10:26.237 }' 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.237 11:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.495 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.495 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.495 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.495 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.753 [2024-11-04 11:42:52.068234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.753 11:42:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.753 "name": "Existed_Raid", 00:10:26.753 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:26.753 "strip_size_kb": 0, 00:10:26.753 "state": "configuring", 00:10:26.753 "raid_level": "raid1", 00:10:26.753 "superblock": false, 00:10:26.753 "num_base_bdevs": 3, 00:10:26.753 "num_base_bdevs_discovered": 1, 00:10:26.753 "num_base_bdevs_operational": 3, 00:10:26.753 "base_bdevs_list": [ 00:10:26.753 { 00:10:26.753 "name": "BaseBdev1", 00:10:26.753 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:26.753 "is_configured": true, 00:10:26.753 "data_offset": 0, 00:10:26.753 "data_size": 65536 00:10:26.753 }, 00:10:26.753 { 00:10:26.753 "name": null, 00:10:26.753 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:26.753 "is_configured": false, 00:10:26.753 "data_offset": 0, 00:10:26.753 "data_size": 65536 00:10:26.753 }, 00:10:26.753 { 00:10:26.753 "name": null, 00:10:26.753 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:26.753 "is_configured": false, 00:10:26.753 "data_offset": 0, 00:10:26.753 "data_size": 65536 00:10:26.753 } 00:10:26.753 ] 00:10:26.753 }' 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.753 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.320 [2024-11-04 11:42:52.595392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.320 "name": "Existed_Raid", 00:10:27.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.320 "strip_size_kb": 0, 00:10:27.320 "state": "configuring", 00:10:27.320 "raid_level": "raid1", 00:10:27.320 "superblock": false, 00:10:27.320 "num_base_bdevs": 3, 00:10:27.320 "num_base_bdevs_discovered": 2, 00:10:27.320 "num_base_bdevs_operational": 3, 00:10:27.320 "base_bdevs_list": [ 00:10:27.320 { 00:10:27.320 "name": "BaseBdev1", 00:10:27.320 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:27.320 "is_configured": true, 00:10:27.320 "data_offset": 0, 00:10:27.320 "data_size": 65536 00:10:27.320 }, 00:10:27.320 { 00:10:27.320 "name": null, 00:10:27.320 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:27.320 "is_configured": false, 00:10:27.320 "data_offset": 0, 00:10:27.320 "data_size": 65536 00:10:27.320 }, 00:10:27.320 { 00:10:27.320 "name": "BaseBdev3", 00:10:27.320 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:27.320 "is_configured": true, 00:10:27.320 "data_offset": 0, 00:10:27.320 "data_size": 65536 00:10:27.320 } 00:10:27.320 ] 00:10:27.320 }' 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.320 11:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.579 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.838 [2024-11-04 11:42:53.102572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.838 11:42:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.838 "name": "Existed_Raid", 00:10:27.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.838 "strip_size_kb": 0, 00:10:27.838 "state": "configuring", 00:10:27.838 "raid_level": "raid1", 00:10:27.838 "superblock": false, 00:10:27.838 "num_base_bdevs": 3, 00:10:27.838 "num_base_bdevs_discovered": 1, 00:10:27.838 "num_base_bdevs_operational": 3, 00:10:27.838 "base_bdevs_list": [ 00:10:27.838 { 00:10:27.838 "name": null, 00:10:27.838 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:27.838 "is_configured": false, 00:10:27.838 "data_offset": 0, 00:10:27.838 "data_size": 65536 00:10:27.838 }, 00:10:27.838 { 00:10:27.838 "name": null, 00:10:27.838 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:27.838 "is_configured": false, 00:10:27.838 "data_offset": 0, 00:10:27.838 "data_size": 65536 00:10:27.838 }, 00:10:27.838 { 00:10:27.838 "name": "BaseBdev3", 00:10:27.838 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:27.838 "is_configured": true, 00:10:27.838 "data_offset": 0, 00:10:27.838 "data_size": 65536 00:10:27.838 } 00:10:27.838 ] 00:10:27.838 }' 00:10:27.838 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.839 11:42:53 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.406 [2024-11-04 11:42:53.680025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.406 "name": "Existed_Raid", 00:10:28.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.406 "strip_size_kb": 0, 00:10:28.406 "state": "configuring", 00:10:28.406 "raid_level": "raid1", 00:10:28.406 "superblock": false, 00:10:28.406 "num_base_bdevs": 3, 00:10:28.406 "num_base_bdevs_discovered": 2, 00:10:28.406 "num_base_bdevs_operational": 3, 00:10:28.406 "base_bdevs_list": [ 00:10:28.406 { 00:10:28.406 "name": null, 00:10:28.406 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:28.406 "is_configured": false, 00:10:28.406 "data_offset": 0, 00:10:28.406 "data_size": 65536 00:10:28.406 }, 00:10:28.406 { 00:10:28.406 "name": "BaseBdev2", 00:10:28.406 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:28.406 "is_configured": true, 00:10:28.406 "data_offset": 0, 00:10:28.406 "data_size": 65536 00:10:28.406 }, 00:10:28.406 { 
00:10:28.406 "name": "BaseBdev3", 00:10:28.406 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:28.406 "is_configured": true, 00:10:28.406 "data_offset": 0, 00:10:28.406 "data_size": 65536 00:10:28.406 } 00:10:28.406 ] 00:10:28.406 }' 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.406 11:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.664 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e9286de7-a793-48e9-9bd2-8f151c4e0b3f 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.923 11:42:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.923 [2024-11-04 11:42:54.273610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:28.923 [2024-11-04 11:42:54.273676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:28.923 [2024-11-04 11:42:54.273685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:28.923 [2024-11-04 11:42:54.274022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:28.923 [2024-11-04 11:42:54.274233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.923 [2024-11-04 11:42:54.274258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:28.923 [2024-11-04 11:42:54.274563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.923 NewBaseBdev 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:28.923 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.924 [ 00:10:28.924 { 00:10:28.924 "name": "NewBaseBdev", 00:10:28.924 "aliases": [ 00:10:28.924 "e9286de7-a793-48e9-9bd2-8f151c4e0b3f" 00:10:28.924 ], 00:10:28.924 "product_name": "Malloc disk", 00:10:28.924 "block_size": 512, 00:10:28.924 "num_blocks": 65536, 00:10:28.924 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:28.924 "assigned_rate_limits": { 00:10:28.924 "rw_ios_per_sec": 0, 00:10:28.924 "rw_mbytes_per_sec": 0, 00:10:28.924 "r_mbytes_per_sec": 0, 00:10:28.924 "w_mbytes_per_sec": 0 00:10:28.924 }, 00:10:28.924 "claimed": true, 00:10:28.924 "claim_type": "exclusive_write", 00:10:28.924 "zoned": false, 00:10:28.924 "supported_io_types": { 00:10:28.924 "read": true, 00:10:28.924 "write": true, 00:10:28.924 "unmap": true, 00:10:28.924 "flush": true, 00:10:28.924 "reset": true, 00:10:28.924 "nvme_admin": false, 00:10:28.924 "nvme_io": false, 00:10:28.924 "nvme_io_md": false, 00:10:28.924 "write_zeroes": true, 00:10:28.924 "zcopy": true, 00:10:28.924 "get_zone_info": false, 00:10:28.924 "zone_management": false, 00:10:28.924 "zone_append": false, 00:10:28.924 "compare": false, 00:10:28.924 "compare_and_write": false, 00:10:28.924 "abort": true, 00:10:28.924 "seek_hole": false, 00:10:28.924 "seek_data": false, 00:10:28.924 "copy": true, 00:10:28.924 "nvme_iov_md": false 00:10:28.924 }, 00:10:28.924 "memory_domains": [ 00:10:28.924 { 00:10:28.924 
"dma_device_id": "system", 00:10:28.924 "dma_device_type": 1 00:10:28.924 }, 00:10:28.924 { 00:10:28.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.924 "dma_device_type": 2 00:10:28.924 } 00:10:28.924 ], 00:10:28.924 "driver_specific": {} 00:10:28.924 } 00:10:28.924 ] 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.924 "name": "Existed_Raid", 00:10:28.924 "uuid": "a844c6ad-9cdf-428c-8868-3d3e29e60e39", 00:10:28.924 "strip_size_kb": 0, 00:10:28.924 "state": "online", 00:10:28.924 "raid_level": "raid1", 00:10:28.924 "superblock": false, 00:10:28.924 "num_base_bdevs": 3, 00:10:28.924 "num_base_bdevs_discovered": 3, 00:10:28.924 "num_base_bdevs_operational": 3, 00:10:28.924 "base_bdevs_list": [ 00:10:28.924 { 00:10:28.924 "name": "NewBaseBdev", 00:10:28.924 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:28.924 "is_configured": true, 00:10:28.924 "data_offset": 0, 00:10:28.924 "data_size": 65536 00:10:28.924 }, 00:10:28.924 { 00:10:28.924 "name": "BaseBdev2", 00:10:28.924 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:28.924 "is_configured": true, 00:10:28.924 "data_offset": 0, 00:10:28.924 "data_size": 65536 00:10:28.924 }, 00:10:28.924 { 00:10:28.924 "name": "BaseBdev3", 00:10:28.924 "uuid": "e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:28.924 "is_configured": true, 00:10:28.924 "data_offset": 0, 00:10:28.924 "data_size": 65536 00:10:28.924 } 00:10:28.924 ] 00:10:28.924 }' 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.924 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.536 11:42:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.536 [2024-11-04 11:42:54.753230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.536 "name": "Existed_Raid", 00:10:29.536 "aliases": [ 00:10:29.536 "a844c6ad-9cdf-428c-8868-3d3e29e60e39" 00:10:29.536 ], 00:10:29.536 "product_name": "Raid Volume", 00:10:29.536 "block_size": 512, 00:10:29.536 "num_blocks": 65536, 00:10:29.536 "uuid": "a844c6ad-9cdf-428c-8868-3d3e29e60e39", 00:10:29.536 "assigned_rate_limits": { 00:10:29.536 "rw_ios_per_sec": 0, 00:10:29.536 "rw_mbytes_per_sec": 0, 00:10:29.536 "r_mbytes_per_sec": 0, 00:10:29.536 "w_mbytes_per_sec": 0 00:10:29.536 }, 00:10:29.536 "claimed": false, 00:10:29.536 "zoned": false, 00:10:29.536 "supported_io_types": { 00:10:29.536 "read": true, 00:10:29.536 "write": true, 00:10:29.536 "unmap": false, 00:10:29.536 "flush": false, 00:10:29.536 "reset": true, 00:10:29.536 "nvme_admin": false, 00:10:29.536 "nvme_io": false, 00:10:29.536 "nvme_io_md": false, 00:10:29.536 "write_zeroes": true, 00:10:29.536 "zcopy": false, 00:10:29.536 
"get_zone_info": false, 00:10:29.536 "zone_management": false, 00:10:29.536 "zone_append": false, 00:10:29.536 "compare": false, 00:10:29.536 "compare_and_write": false, 00:10:29.536 "abort": false, 00:10:29.536 "seek_hole": false, 00:10:29.536 "seek_data": false, 00:10:29.536 "copy": false, 00:10:29.536 "nvme_iov_md": false 00:10:29.536 }, 00:10:29.536 "memory_domains": [ 00:10:29.536 { 00:10:29.536 "dma_device_id": "system", 00:10:29.536 "dma_device_type": 1 00:10:29.536 }, 00:10:29.536 { 00:10:29.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.536 "dma_device_type": 2 00:10:29.536 }, 00:10:29.536 { 00:10:29.536 "dma_device_id": "system", 00:10:29.536 "dma_device_type": 1 00:10:29.536 }, 00:10:29.536 { 00:10:29.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.536 "dma_device_type": 2 00:10:29.536 }, 00:10:29.536 { 00:10:29.536 "dma_device_id": "system", 00:10:29.536 "dma_device_type": 1 00:10:29.536 }, 00:10:29.536 { 00:10:29.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.536 "dma_device_type": 2 00:10:29.536 } 00:10:29.536 ], 00:10:29.536 "driver_specific": { 00:10:29.536 "raid": { 00:10:29.536 "uuid": "a844c6ad-9cdf-428c-8868-3d3e29e60e39", 00:10:29.536 "strip_size_kb": 0, 00:10:29.536 "state": "online", 00:10:29.536 "raid_level": "raid1", 00:10:29.536 "superblock": false, 00:10:29.536 "num_base_bdevs": 3, 00:10:29.536 "num_base_bdevs_discovered": 3, 00:10:29.536 "num_base_bdevs_operational": 3, 00:10:29.536 "base_bdevs_list": [ 00:10:29.536 { 00:10:29.536 "name": "NewBaseBdev", 00:10:29.536 "uuid": "e9286de7-a793-48e9-9bd2-8f151c4e0b3f", 00:10:29.536 "is_configured": true, 00:10:29.536 "data_offset": 0, 00:10:29.536 "data_size": 65536 00:10:29.536 }, 00:10:29.536 { 00:10:29.536 "name": "BaseBdev2", 00:10:29.536 "uuid": "de6b76f9-4c71-43d2-91ec-a08ea15f80c5", 00:10:29.536 "is_configured": true, 00:10:29.536 "data_offset": 0, 00:10:29.536 "data_size": 65536 00:10:29.536 }, 00:10:29.536 { 00:10:29.536 "name": "BaseBdev3", 00:10:29.536 "uuid": 
"e07ca83e-6132-4e4d-920e-9e64e89f0746", 00:10:29.536 "is_configured": true, 00:10:29.536 "data_offset": 0, 00:10:29.536 "data_size": 65536 00:10:29.536 } 00:10:29.536 ] 00:10:29.536 } 00:10:29.536 } 00:10:29.536 }' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:29.536 BaseBdev2 00:10:29.536 BaseBdev3' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.536 11:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.536 
[2024-11-04 11:42:55.024454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.536 [2024-11-04 11:42:55.024497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.536 [2024-11-04 11:42:55.024591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.536 [2024-11-04 11:42:55.024941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.536 [2024-11-04 11:42:55.024964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67626 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67626 ']' 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67626 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:29.536 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67626 00:10:29.800 killing process with pid 67626 00:10:29.800 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:29.800 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:29.800 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67626' 00:10:29.800 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67626 00:10:29.800 [2024-11-04 
11:42:55.059664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.800 11:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67626 00:10:30.058 [2024-11-04 11:42:55.385455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.437 ************************************ 00:10:31.437 END TEST raid_state_function_test 00:10:31.437 ************************************ 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:31.437 00:10:31.437 real 0m10.729s 00:10:31.437 user 0m16.939s 00:10:31.437 sys 0m1.840s 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.437 11:42:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:31.437 11:42:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:31.437 11:42:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:31.437 11:42:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.437 ************************************ 00:10:31.437 START TEST raid_state_function_test_sb 00:10:31.437 ************************************ 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:31.437 11:42:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:31.437 
11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68247 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68247' 00:10:31.437 Process raid pid: 68247 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68247 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68247 ']' 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.437 11:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.437 [2024-11-04 11:42:56.769520] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:10:31.437 [2024-11-04 11:42:56.769660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.437 [2024-11-04 11:42:56.927495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.696 [2024-11-04 11:42:57.047893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.955 [2024-11-04 11:42:57.262175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.955 [2024-11-04 11:42:57.262223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.213 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:32.213 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:32.213 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:32.213 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.213 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 [2024-11-04 11:42:57.623122] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.213 [2024-11-04 11:42:57.623184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.213 [2024-11-04 11:42:57.623196] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.213 [2024-11-04 11:42:57.623206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.213 [2024-11-04 11:42:57.623213] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:32.213 [2024-11-04 11:42:57.623221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.214 "name": "Existed_Raid", 00:10:32.214 "uuid": "730bfa30-df99-4104-99b8-ecbfabbd2f9b", 00:10:32.214 "strip_size_kb": 0, 00:10:32.214 "state": "configuring", 00:10:32.214 "raid_level": "raid1", 00:10:32.214 "superblock": true, 00:10:32.214 "num_base_bdevs": 3, 00:10:32.214 "num_base_bdevs_discovered": 0, 00:10:32.214 "num_base_bdevs_operational": 3, 00:10:32.214 "base_bdevs_list": [ 00:10:32.214 { 00:10:32.214 "name": "BaseBdev1", 00:10:32.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.214 "is_configured": false, 00:10:32.214 "data_offset": 0, 00:10:32.214 "data_size": 0 00:10:32.214 }, 00:10:32.214 { 00:10:32.214 "name": "BaseBdev2", 00:10:32.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.214 "is_configured": false, 00:10:32.214 "data_offset": 0, 00:10:32.214 "data_size": 0 00:10:32.214 }, 00:10:32.214 { 00:10:32.214 "name": "BaseBdev3", 00:10:32.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.214 "is_configured": false, 00:10:32.214 "data_offset": 0, 00:10:32.214 "data_size": 0 00:10:32.214 } 00:10:32.214 ] 00:10:32.214 }' 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.214 11:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 [2024-11-04 11:42:58.054330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.783 [2024-11-04 11:42:58.054373] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 [2024-11-04 11:42:58.066306] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.783 [2024-11-04 11:42:58.066357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.783 [2024-11-04 11:42:58.066367] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.783 [2024-11-04 11:42:58.066377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.783 [2024-11-04 11:42:58.066384] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.783 [2024-11-04 11:42:58.066405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 [2024-11-04 11:42:58.117156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.783 BaseBdev1 
00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 [ 00:10:32.783 { 00:10:32.783 "name": "BaseBdev1", 00:10:32.783 "aliases": [ 00:10:32.783 "d6a9ff34-d7e1-482b-965d-9554556e5f35" 00:10:32.783 ], 00:10:32.783 "product_name": "Malloc disk", 00:10:32.783 "block_size": 512, 00:10:32.783 "num_blocks": 65536, 00:10:32.783 "uuid": "d6a9ff34-d7e1-482b-965d-9554556e5f35", 00:10:32.783 "assigned_rate_limits": { 00:10:32.783 
"rw_ios_per_sec": 0, 00:10:32.783 "rw_mbytes_per_sec": 0, 00:10:32.783 "r_mbytes_per_sec": 0, 00:10:32.783 "w_mbytes_per_sec": 0 00:10:32.783 }, 00:10:32.783 "claimed": true, 00:10:32.783 "claim_type": "exclusive_write", 00:10:32.783 "zoned": false, 00:10:32.783 "supported_io_types": { 00:10:32.783 "read": true, 00:10:32.783 "write": true, 00:10:32.783 "unmap": true, 00:10:32.783 "flush": true, 00:10:32.783 "reset": true, 00:10:32.783 "nvme_admin": false, 00:10:32.783 "nvme_io": false, 00:10:32.783 "nvme_io_md": false, 00:10:32.783 "write_zeroes": true, 00:10:32.783 "zcopy": true, 00:10:32.783 "get_zone_info": false, 00:10:32.783 "zone_management": false, 00:10:32.783 "zone_append": false, 00:10:32.783 "compare": false, 00:10:32.783 "compare_and_write": false, 00:10:32.783 "abort": true, 00:10:32.783 "seek_hole": false, 00:10:32.783 "seek_data": false, 00:10:32.783 "copy": true, 00:10:32.783 "nvme_iov_md": false 00:10:32.783 }, 00:10:32.783 "memory_domains": [ 00:10:32.783 { 00:10:32.783 "dma_device_id": "system", 00:10:32.783 "dma_device_type": 1 00:10:32.783 }, 00:10:32.783 { 00:10:32.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.783 "dma_device_type": 2 00:10:32.783 } 00:10:32.783 ], 00:10:32.783 "driver_specific": {} 00:10:32.783 } 00:10:32.783 ] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.783 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.783 "name": "Existed_Raid", 00:10:32.783 "uuid": "d38038c8-28c4-4cab-b499-e75ef653837e", 00:10:32.783 "strip_size_kb": 0, 00:10:32.783 "state": "configuring", 00:10:32.783 "raid_level": "raid1", 00:10:32.783 "superblock": true, 00:10:32.783 "num_base_bdevs": 3, 00:10:32.783 "num_base_bdevs_discovered": 1, 00:10:32.783 "num_base_bdevs_operational": 3, 00:10:32.783 "base_bdevs_list": [ 00:10:32.783 { 00:10:32.783 "name": "BaseBdev1", 00:10:32.783 "uuid": "d6a9ff34-d7e1-482b-965d-9554556e5f35", 00:10:32.783 "is_configured": true, 00:10:32.784 "data_offset": 2048, 00:10:32.784 "data_size": 63488 
00:10:32.784 }, 00:10:32.784 { 00:10:32.784 "name": "BaseBdev2", 00:10:32.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.784 "is_configured": false, 00:10:32.784 "data_offset": 0, 00:10:32.784 "data_size": 0 00:10:32.784 }, 00:10:32.784 { 00:10:32.784 "name": "BaseBdev3", 00:10:32.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.784 "is_configured": false, 00:10:32.784 "data_offset": 0, 00:10:32.784 "data_size": 0 00:10:32.784 } 00:10:32.784 ] 00:10:32.784 }' 00:10:32.784 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.784 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.419 [2024-11-04 11:42:58.628363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.419 [2024-11-04 11:42:58.628439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.419 [2024-11-04 11:42:58.636436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.419 [2024-11-04 11:42:58.638468] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.419 [2024-11-04 11:42:58.638513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.419 [2024-11-04 11:42:58.638523] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.419 [2024-11-04 11:42:58.638532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.419 "name": "Existed_Raid", 00:10:33.419 "uuid": "b4a8b801-dab7-47a3-8caf-d655016f65ea", 00:10:33.419 "strip_size_kb": 0, 00:10:33.419 "state": "configuring", 00:10:33.419 "raid_level": "raid1", 00:10:33.419 "superblock": true, 00:10:33.419 "num_base_bdevs": 3, 00:10:33.419 "num_base_bdevs_discovered": 1, 00:10:33.419 "num_base_bdevs_operational": 3, 00:10:33.419 "base_bdevs_list": [ 00:10:33.419 { 00:10:33.419 "name": "BaseBdev1", 00:10:33.419 "uuid": "d6a9ff34-d7e1-482b-965d-9554556e5f35", 00:10:33.419 "is_configured": true, 00:10:33.419 "data_offset": 2048, 00:10:33.419 "data_size": 63488 00:10:33.419 }, 00:10:33.419 { 00:10:33.419 "name": "BaseBdev2", 00:10:33.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.419 "is_configured": false, 00:10:33.419 "data_offset": 0, 00:10:33.419 "data_size": 0 00:10:33.419 }, 00:10:33.419 { 00:10:33.419 "name": "BaseBdev3", 00:10:33.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.419 "is_configured": false, 00:10:33.419 "data_offset": 0, 00:10:33.419 "data_size": 0 00:10:33.419 } 00:10:33.419 ] 00:10:33.419 }' 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.419 11:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:33.679 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.679 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.679 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.679 [2024-11-04 11:42:59.066486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.680 BaseBdev2 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.680 [ 00:10:33.680 { 00:10:33.680 "name": "BaseBdev2", 00:10:33.680 "aliases": [ 00:10:33.680 "cadde868-7f63-4523-b982-15f98a7eb443" 00:10:33.680 ], 00:10:33.680 "product_name": "Malloc disk", 00:10:33.680 "block_size": 512, 00:10:33.680 "num_blocks": 65536, 00:10:33.680 "uuid": "cadde868-7f63-4523-b982-15f98a7eb443", 00:10:33.680 "assigned_rate_limits": { 00:10:33.680 "rw_ios_per_sec": 0, 00:10:33.680 "rw_mbytes_per_sec": 0, 00:10:33.680 "r_mbytes_per_sec": 0, 00:10:33.680 "w_mbytes_per_sec": 0 00:10:33.680 }, 00:10:33.680 "claimed": true, 00:10:33.680 "claim_type": "exclusive_write", 00:10:33.680 "zoned": false, 00:10:33.680 "supported_io_types": { 00:10:33.680 "read": true, 00:10:33.680 "write": true, 00:10:33.680 "unmap": true, 00:10:33.680 "flush": true, 00:10:33.680 "reset": true, 00:10:33.680 "nvme_admin": false, 00:10:33.680 "nvme_io": false, 00:10:33.680 "nvme_io_md": false, 00:10:33.680 "write_zeroes": true, 00:10:33.680 "zcopy": true, 00:10:33.680 "get_zone_info": false, 00:10:33.680 "zone_management": false, 00:10:33.680 "zone_append": false, 00:10:33.680 "compare": false, 00:10:33.680 "compare_and_write": false, 00:10:33.680 "abort": true, 00:10:33.680 "seek_hole": false, 00:10:33.680 "seek_data": false, 00:10:33.680 "copy": true, 00:10:33.680 "nvme_iov_md": false 00:10:33.680 }, 00:10:33.680 "memory_domains": [ 00:10:33.680 { 00:10:33.680 "dma_device_id": "system", 00:10:33.680 "dma_device_type": 1 00:10:33.680 }, 00:10:33.680 { 00:10:33.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.680 "dma_device_type": 2 00:10:33.680 } 00:10:33.680 ], 00:10:33.680 "driver_specific": {} 00:10:33.680 } 00:10:33.680 ] 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.680 
11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.680 "name": "Existed_Raid", 00:10:33.680 "uuid": "b4a8b801-dab7-47a3-8caf-d655016f65ea", 00:10:33.680 "strip_size_kb": 0, 00:10:33.680 "state": "configuring", 00:10:33.680 "raid_level": "raid1", 00:10:33.680 "superblock": true, 00:10:33.680 "num_base_bdevs": 3, 00:10:33.680 "num_base_bdevs_discovered": 2, 00:10:33.680 "num_base_bdevs_operational": 3, 00:10:33.680 "base_bdevs_list": [ 00:10:33.680 { 00:10:33.680 "name": "BaseBdev1", 00:10:33.680 "uuid": "d6a9ff34-d7e1-482b-965d-9554556e5f35", 00:10:33.680 "is_configured": true, 00:10:33.680 "data_offset": 2048, 00:10:33.680 "data_size": 63488 00:10:33.680 }, 00:10:33.680 { 00:10:33.680 "name": "BaseBdev2", 00:10:33.680 "uuid": "cadde868-7f63-4523-b982-15f98a7eb443", 00:10:33.680 "is_configured": true, 00:10:33.680 "data_offset": 2048, 00:10:33.680 "data_size": 63488 00:10:33.680 }, 00:10:33.680 { 00:10:33.680 "name": "BaseBdev3", 00:10:33.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.680 "is_configured": false, 00:10:33.680 "data_offset": 0, 00:10:33.680 "data_size": 0 00:10:33.680 } 00:10:33.680 ] 00:10:33.680 }' 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.680 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.254 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.254 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.254 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.254 [2024-11-04 11:42:59.573307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.254 [2024-11-04 11:42:59.573763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:34.254 [2024-11-04 11:42:59.573825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.254 [2024-11-04 11:42:59.574153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:34.254 BaseBdev3 00:10:34.254 [2024-11-04 11:42:59.574375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.254 [2024-11-04 11:42:59.574431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.255 [2024-11-04 11:42:59.574652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.255 11:42:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.255 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.255 [ 00:10:34.255 { 00:10:34.255 "name": "BaseBdev3", 00:10:34.255 "aliases": [ 00:10:34.255 "c7e5452d-72e3-4a93-8ef1-faf26add59bd" 00:10:34.255 ], 00:10:34.255 "product_name": "Malloc disk", 00:10:34.255 "block_size": 512, 00:10:34.255 "num_blocks": 65536, 00:10:34.255 "uuid": "c7e5452d-72e3-4a93-8ef1-faf26add59bd", 00:10:34.255 "assigned_rate_limits": { 00:10:34.255 "rw_ios_per_sec": 0, 00:10:34.255 "rw_mbytes_per_sec": 0, 00:10:34.255 "r_mbytes_per_sec": 0, 00:10:34.255 "w_mbytes_per_sec": 0 00:10:34.255 }, 00:10:34.255 "claimed": true, 00:10:34.255 "claim_type": "exclusive_write", 00:10:34.255 "zoned": false, 00:10:34.256 "supported_io_types": { 00:10:34.256 "read": true, 00:10:34.256 "write": true, 00:10:34.256 "unmap": true, 00:10:34.256 "flush": true, 00:10:34.256 "reset": true, 00:10:34.256 "nvme_admin": false, 00:10:34.256 "nvme_io": false, 00:10:34.256 "nvme_io_md": false, 00:10:34.256 "write_zeroes": true, 00:10:34.256 "zcopy": true, 00:10:34.256 "get_zone_info": false, 00:10:34.256 "zone_management": false, 00:10:34.256 "zone_append": false, 00:10:34.256 "compare": false, 00:10:34.256 "compare_and_write": false, 00:10:34.256 "abort": true, 00:10:34.256 "seek_hole": false, 00:10:34.256 "seek_data": false, 00:10:34.256 "copy": true, 00:10:34.256 "nvme_iov_md": false 00:10:34.256 }, 00:10:34.256 "memory_domains": [ 00:10:34.256 { 00:10:34.256 "dma_device_id": "system", 00:10:34.256 "dma_device_type": 1 00:10:34.256 }, 00:10:34.256 { 00:10:34.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.256 "dma_device_type": 2 00:10:34.256 } 00:10:34.256 ], 00:10:34.256 "driver_specific": {} 00:10:34.256 } 00:10:34.256 ] 
00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.256 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.257 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.257 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.257 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.257 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.257 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.257 
11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.257 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.257 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.257 "name": "Existed_Raid", 00:10:34.257 "uuid": "b4a8b801-dab7-47a3-8caf-d655016f65ea", 00:10:34.257 "strip_size_kb": 0, 00:10:34.257 "state": "online", 00:10:34.257 "raid_level": "raid1", 00:10:34.257 "superblock": true, 00:10:34.257 "num_base_bdevs": 3, 00:10:34.257 "num_base_bdevs_discovered": 3, 00:10:34.257 "num_base_bdevs_operational": 3, 00:10:34.257 "base_bdevs_list": [ 00:10:34.257 { 00:10:34.257 "name": "BaseBdev1", 00:10:34.257 "uuid": "d6a9ff34-d7e1-482b-965d-9554556e5f35", 00:10:34.257 "is_configured": true, 00:10:34.257 "data_offset": 2048, 00:10:34.257 "data_size": 63488 00:10:34.257 }, 00:10:34.257 { 00:10:34.257 "name": "BaseBdev2", 00:10:34.257 "uuid": "cadde868-7f63-4523-b982-15f98a7eb443", 00:10:34.257 "is_configured": true, 00:10:34.257 "data_offset": 2048, 00:10:34.257 "data_size": 63488 00:10:34.257 }, 00:10:34.257 { 00:10:34.257 "name": "BaseBdev3", 00:10:34.257 "uuid": "c7e5452d-72e3-4a93-8ef1-faf26add59bd", 00:10:34.257 "is_configured": true, 00:10:34.258 "data_offset": 2048, 00:10:34.258 "data_size": 63488 00:10:34.258 } 00:10:34.258 ] 00:10:34.258 }' 00:10:34.258 11:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.258 11:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.518 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.518 [2024-11-04 11:43:00.020962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.778 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.778 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.778 "name": "Existed_Raid", 00:10:34.778 "aliases": [ 00:10:34.778 "b4a8b801-dab7-47a3-8caf-d655016f65ea" 00:10:34.778 ], 00:10:34.778 "product_name": "Raid Volume", 00:10:34.778 "block_size": 512, 00:10:34.778 "num_blocks": 63488, 00:10:34.778 "uuid": "b4a8b801-dab7-47a3-8caf-d655016f65ea", 00:10:34.778 "assigned_rate_limits": { 00:10:34.778 "rw_ios_per_sec": 0, 00:10:34.779 "rw_mbytes_per_sec": 0, 00:10:34.779 "r_mbytes_per_sec": 0, 00:10:34.779 "w_mbytes_per_sec": 0 00:10:34.779 }, 00:10:34.779 "claimed": false, 00:10:34.779 "zoned": false, 00:10:34.779 "supported_io_types": { 00:10:34.779 "read": true, 00:10:34.779 "write": true, 00:10:34.779 "unmap": false, 00:10:34.779 "flush": false, 00:10:34.779 "reset": true, 00:10:34.779 "nvme_admin": false, 00:10:34.779 "nvme_io": false, 00:10:34.779 "nvme_io_md": false, 00:10:34.779 "write_zeroes": true, 
00:10:34.779 "zcopy": false, 00:10:34.779 "get_zone_info": false, 00:10:34.779 "zone_management": false, 00:10:34.779 "zone_append": false, 00:10:34.779 "compare": false, 00:10:34.779 "compare_and_write": false, 00:10:34.779 "abort": false, 00:10:34.779 "seek_hole": false, 00:10:34.779 "seek_data": false, 00:10:34.779 "copy": false, 00:10:34.779 "nvme_iov_md": false 00:10:34.779 }, 00:10:34.779 "memory_domains": [ 00:10:34.779 { 00:10:34.779 "dma_device_id": "system", 00:10:34.779 "dma_device_type": 1 00:10:34.779 }, 00:10:34.779 { 00:10:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.779 "dma_device_type": 2 00:10:34.779 }, 00:10:34.779 { 00:10:34.779 "dma_device_id": "system", 00:10:34.779 "dma_device_type": 1 00:10:34.779 }, 00:10:34.779 { 00:10:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.779 "dma_device_type": 2 00:10:34.779 }, 00:10:34.779 { 00:10:34.779 "dma_device_id": "system", 00:10:34.779 "dma_device_type": 1 00:10:34.779 }, 00:10:34.779 { 00:10:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.779 "dma_device_type": 2 00:10:34.779 } 00:10:34.779 ], 00:10:34.779 "driver_specific": { 00:10:34.779 "raid": { 00:10:34.779 "uuid": "b4a8b801-dab7-47a3-8caf-d655016f65ea", 00:10:34.779 "strip_size_kb": 0, 00:10:34.779 "state": "online", 00:10:34.779 "raid_level": "raid1", 00:10:34.779 "superblock": true, 00:10:34.779 "num_base_bdevs": 3, 00:10:34.779 "num_base_bdevs_discovered": 3, 00:10:34.779 "num_base_bdevs_operational": 3, 00:10:34.779 "base_bdevs_list": [ 00:10:34.779 { 00:10:34.779 "name": "BaseBdev1", 00:10:34.779 "uuid": "d6a9ff34-d7e1-482b-965d-9554556e5f35", 00:10:34.779 "is_configured": true, 00:10:34.779 "data_offset": 2048, 00:10:34.779 "data_size": 63488 00:10:34.779 }, 00:10:34.779 { 00:10:34.779 "name": "BaseBdev2", 00:10:34.779 "uuid": "cadde868-7f63-4523-b982-15f98a7eb443", 00:10:34.779 "is_configured": true, 00:10:34.779 "data_offset": 2048, 00:10:34.779 "data_size": 63488 00:10:34.779 }, 00:10:34.779 { 
00:10:34.779 "name": "BaseBdev3", 00:10:34.779 "uuid": "c7e5452d-72e3-4a93-8ef1-faf26add59bd", 00:10:34.779 "is_configured": true, 00:10:34.779 "data_offset": 2048, 00:10:34.779 "data_size": 63488 00:10:34.779 } 00:10:34.779 ] 00:10:34.779 } 00:10:34.779 } 00:10:34.779 }' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.779 BaseBdev2 00:10:34.779 BaseBdev3' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.779 11:43:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.779 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.779 [2024-11-04 11:43:00.296313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.039 
11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.039 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.040 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.040 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.040 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.040 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.040 "name": "Existed_Raid", 00:10:35.040 "uuid": "b4a8b801-dab7-47a3-8caf-d655016f65ea", 00:10:35.040 "strip_size_kb": 0, 00:10:35.040 "state": "online", 00:10:35.040 "raid_level": "raid1", 00:10:35.040 "superblock": true, 00:10:35.040 "num_base_bdevs": 3, 00:10:35.040 "num_base_bdevs_discovered": 2, 00:10:35.040 "num_base_bdevs_operational": 2, 00:10:35.040 "base_bdevs_list": [ 00:10:35.040 { 00:10:35.040 "name": null, 00:10:35.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.040 "is_configured": false, 00:10:35.040 "data_offset": 0, 00:10:35.040 "data_size": 63488 00:10:35.040 }, 00:10:35.040 { 00:10:35.040 "name": "BaseBdev2", 00:10:35.040 "uuid": "cadde868-7f63-4523-b982-15f98a7eb443", 00:10:35.040 "is_configured": true, 00:10:35.040 "data_offset": 2048, 00:10:35.040 "data_size": 63488 00:10:35.040 }, 00:10:35.040 { 00:10:35.040 "name": "BaseBdev3", 00:10:35.040 "uuid": "c7e5452d-72e3-4a93-8ef1-faf26add59bd", 00:10:35.040 "is_configured": true, 00:10:35.040 "data_offset": 2048, 00:10:35.040 "data_size": 63488 00:10:35.040 } 00:10:35.040 ] 00:10:35.040 }' 00:10:35.040 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.040 
11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.299 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.299 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.299 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.299 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.299 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.299 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.558 [2024-11-04 11:43:00.867020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.558 11:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.558 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.558 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.558 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.558 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.558 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.558 [2024-11-04 11:43:01.022217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.558 [2024-11-04 11:43:01.022425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.818 [2024-11-04 11:43:01.122595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.818 [2024-11-04 11:43:01.122741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.818 [2024-11-04 11:43:01.122785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.818 BaseBdev2 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.818 [ 00:10:35.818 { 00:10:35.818 "name": "BaseBdev2", 00:10:35.818 "aliases": [ 00:10:35.818 "90e16a1d-c276-44e8-a22f-5bad6e721cda" 00:10:35.818 ], 00:10:35.818 "product_name": "Malloc disk", 00:10:35.818 "block_size": 512, 00:10:35.818 "num_blocks": 65536, 00:10:35.818 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:35.818 "assigned_rate_limits": { 00:10:35.818 "rw_ios_per_sec": 0, 00:10:35.818 "rw_mbytes_per_sec": 0, 00:10:35.818 "r_mbytes_per_sec": 0, 00:10:35.818 "w_mbytes_per_sec": 0 00:10:35.818 }, 00:10:35.818 "claimed": false, 00:10:35.818 "zoned": false, 00:10:35.818 "supported_io_types": { 00:10:35.818 "read": true, 00:10:35.818 "write": true, 00:10:35.818 "unmap": true, 00:10:35.818 "flush": true, 00:10:35.818 "reset": true, 00:10:35.818 "nvme_admin": false, 00:10:35.818 "nvme_io": false, 00:10:35.818 
"nvme_io_md": false, 00:10:35.818 "write_zeroes": true, 00:10:35.818 "zcopy": true, 00:10:35.818 "get_zone_info": false, 00:10:35.818 "zone_management": false, 00:10:35.818 "zone_append": false, 00:10:35.818 "compare": false, 00:10:35.818 "compare_and_write": false, 00:10:35.818 "abort": true, 00:10:35.818 "seek_hole": false, 00:10:35.818 "seek_data": false, 00:10:35.818 "copy": true, 00:10:35.818 "nvme_iov_md": false 00:10:35.818 }, 00:10:35.818 "memory_domains": [ 00:10:35.818 { 00:10:35.818 "dma_device_id": "system", 00:10:35.818 "dma_device_type": 1 00:10:35.818 }, 00:10:35.818 { 00:10:35.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.818 "dma_device_type": 2 00:10:35.818 } 00:10:35.818 ], 00:10:35.818 "driver_specific": {} 00:10:35.818 } 00:10:35.818 ] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.818 BaseBdev3 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:35.818 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.819 [ 00:10:35.819 { 00:10:35.819 "name": "BaseBdev3", 00:10:35.819 "aliases": [ 00:10:35.819 "3c096d3b-dae6-41ac-a535-070b9b639675" 00:10:35.819 ], 00:10:35.819 "product_name": "Malloc disk", 00:10:35.819 "block_size": 512, 00:10:35.819 "num_blocks": 65536, 00:10:35.819 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:35.819 "assigned_rate_limits": { 00:10:35.819 "rw_ios_per_sec": 0, 00:10:35.819 "rw_mbytes_per_sec": 0, 00:10:35.819 "r_mbytes_per_sec": 0, 00:10:35.819 "w_mbytes_per_sec": 0 00:10:35.819 }, 00:10:35.819 "claimed": false, 00:10:35.819 "zoned": false, 00:10:35.819 "supported_io_types": { 00:10:35.819 "read": true, 00:10:35.819 "write": true, 00:10:35.819 "unmap": true, 00:10:35.819 "flush": true, 00:10:35.819 "reset": true, 00:10:35.819 "nvme_admin": false, 
00:10:35.819 "nvme_io": false, 00:10:35.819 "nvme_io_md": false, 00:10:35.819 "write_zeroes": true, 00:10:35.819 "zcopy": true, 00:10:35.819 "get_zone_info": false, 00:10:35.819 "zone_management": false, 00:10:35.819 "zone_append": false, 00:10:35.819 "compare": false, 00:10:35.819 "compare_and_write": false, 00:10:35.819 "abort": true, 00:10:35.819 "seek_hole": false, 00:10:35.819 "seek_data": false, 00:10:35.819 "copy": true, 00:10:35.819 "nvme_iov_md": false 00:10:35.819 }, 00:10:35.819 "memory_domains": [ 00:10:35.819 { 00:10:35.819 "dma_device_id": "system", 00:10:35.819 "dma_device_type": 1 00:10:35.819 }, 00:10:35.819 { 00:10:35.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.819 "dma_device_type": 2 00:10:35.819 } 00:10:35.819 ], 00:10:35.819 "driver_specific": {} 00:10:35.819 } 00:10:35.819 ] 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.819 [2024-11-04 11:43:01.332711] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.819 [2024-11-04 11:43:01.332828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.819 [2024-11-04 11:43:01.332889] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.819 [2024-11-04 11:43:01.334866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.819 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.078 
11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.078 "name": "Existed_Raid", 00:10:36.078 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:36.078 "strip_size_kb": 0, 00:10:36.078 "state": "configuring", 00:10:36.078 "raid_level": "raid1", 00:10:36.078 "superblock": true, 00:10:36.078 "num_base_bdevs": 3, 00:10:36.078 "num_base_bdevs_discovered": 2, 00:10:36.078 "num_base_bdevs_operational": 3, 00:10:36.078 "base_bdevs_list": [ 00:10:36.078 { 00:10:36.078 "name": "BaseBdev1", 00:10:36.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.078 "is_configured": false, 00:10:36.078 "data_offset": 0, 00:10:36.078 "data_size": 0 00:10:36.078 }, 00:10:36.078 { 00:10:36.078 "name": "BaseBdev2", 00:10:36.078 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:36.078 "is_configured": true, 00:10:36.078 "data_offset": 2048, 00:10:36.078 "data_size": 63488 00:10:36.078 }, 00:10:36.078 { 00:10:36.078 "name": "BaseBdev3", 00:10:36.078 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:36.078 "is_configured": true, 00:10:36.078 "data_offset": 2048, 00:10:36.078 "data_size": 63488 00:10:36.078 } 00:10:36.078 ] 00:10:36.078 }' 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.078 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.338 [2024-11-04 11:43:01.764007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.338 11:43:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.338 "name": 
"Existed_Raid", 00:10:36.338 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:36.338 "strip_size_kb": 0, 00:10:36.338 "state": "configuring", 00:10:36.338 "raid_level": "raid1", 00:10:36.338 "superblock": true, 00:10:36.338 "num_base_bdevs": 3, 00:10:36.338 "num_base_bdevs_discovered": 1, 00:10:36.338 "num_base_bdevs_operational": 3, 00:10:36.338 "base_bdevs_list": [ 00:10:36.338 { 00:10:36.338 "name": "BaseBdev1", 00:10:36.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.338 "is_configured": false, 00:10:36.338 "data_offset": 0, 00:10:36.338 "data_size": 0 00:10:36.338 }, 00:10:36.338 { 00:10:36.338 "name": null, 00:10:36.338 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:36.338 "is_configured": false, 00:10:36.338 "data_offset": 0, 00:10:36.338 "data_size": 63488 00:10:36.338 }, 00:10:36.338 { 00:10:36.338 "name": "BaseBdev3", 00:10:36.338 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:36.338 "is_configured": true, 00:10:36.338 "data_offset": 2048, 00:10:36.338 "data_size": 63488 00:10:36.338 } 00:10:36.338 ] 00:10:36.338 }' 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.338 11:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.909 
11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 [2024-11-04 11:43:02.316555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.909 BaseBdev1 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 [ 00:10:36.909 { 00:10:36.909 "name": "BaseBdev1", 00:10:36.909 "aliases": [ 00:10:36.909 "ce9ab264-4db5-4636-bf15-0c757989056d" 00:10:36.909 ], 00:10:36.909 "product_name": "Malloc disk", 00:10:36.909 "block_size": 512, 00:10:36.909 "num_blocks": 65536, 00:10:36.909 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:36.909 "assigned_rate_limits": { 00:10:36.909 "rw_ios_per_sec": 0, 00:10:36.909 "rw_mbytes_per_sec": 0, 00:10:36.909 "r_mbytes_per_sec": 0, 00:10:36.909 "w_mbytes_per_sec": 0 00:10:36.909 }, 00:10:36.909 "claimed": true, 00:10:36.909 "claim_type": "exclusive_write", 00:10:36.909 "zoned": false, 00:10:36.909 "supported_io_types": { 00:10:36.909 "read": true, 00:10:36.909 "write": true, 00:10:36.909 "unmap": true, 00:10:36.909 "flush": true, 00:10:36.909 "reset": true, 00:10:36.909 "nvme_admin": false, 00:10:36.909 "nvme_io": false, 00:10:36.909 "nvme_io_md": false, 00:10:36.909 "write_zeroes": true, 00:10:36.909 "zcopy": true, 00:10:36.909 "get_zone_info": false, 00:10:36.909 "zone_management": false, 00:10:36.909 "zone_append": false, 00:10:36.909 "compare": false, 00:10:36.909 "compare_and_write": false, 00:10:36.909 "abort": true, 00:10:36.909 "seek_hole": false, 00:10:36.909 "seek_data": false, 00:10:36.909 "copy": true, 00:10:36.909 "nvme_iov_md": false 00:10:36.909 }, 00:10:36.909 "memory_domains": [ 00:10:36.909 { 00:10:36.909 "dma_device_id": "system", 00:10:36.909 "dma_device_type": 1 00:10:36.909 }, 00:10:36.909 { 00:10:36.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.909 "dma_device_type": 2 00:10:36.909 } 00:10:36.909 ], 00:10:36.909 "driver_specific": {} 00:10:36.909 } 00:10:36.909 ] 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:36.909 
11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.909 "name": "Existed_Raid", 00:10:36.909 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:36.909 "strip_size_kb": 0, 
00:10:36.909 "state": "configuring", 00:10:36.909 "raid_level": "raid1", 00:10:36.909 "superblock": true, 00:10:36.909 "num_base_bdevs": 3, 00:10:36.909 "num_base_bdevs_discovered": 2, 00:10:36.909 "num_base_bdevs_operational": 3, 00:10:36.909 "base_bdevs_list": [ 00:10:36.909 { 00:10:36.909 "name": "BaseBdev1", 00:10:36.909 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:36.909 "is_configured": true, 00:10:36.909 "data_offset": 2048, 00:10:36.909 "data_size": 63488 00:10:36.909 }, 00:10:36.909 { 00:10:36.909 "name": null, 00:10:36.909 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:36.909 "is_configured": false, 00:10:36.909 "data_offset": 0, 00:10:36.909 "data_size": 63488 00:10:36.909 }, 00:10:36.909 { 00:10:36.909 "name": "BaseBdev3", 00:10:36.909 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:36.909 "is_configured": true, 00:10:36.909 "data_offset": 2048, 00:10:36.909 "data_size": 63488 00:10:36.909 } 00:10:36.909 ] 00:10:36.909 }' 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.909 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.478 [2024-11-04 11:43:02.823788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.478 11:43:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.478 "name": "Existed_Raid", 00:10:37.478 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:37.478 "strip_size_kb": 0, 00:10:37.478 "state": "configuring", 00:10:37.478 "raid_level": "raid1", 00:10:37.478 "superblock": true, 00:10:37.478 "num_base_bdevs": 3, 00:10:37.478 "num_base_bdevs_discovered": 1, 00:10:37.478 "num_base_bdevs_operational": 3, 00:10:37.478 "base_bdevs_list": [ 00:10:37.478 { 00:10:37.478 "name": "BaseBdev1", 00:10:37.478 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:37.478 "is_configured": true, 00:10:37.478 "data_offset": 2048, 00:10:37.478 "data_size": 63488 00:10:37.478 }, 00:10:37.478 { 00:10:37.478 "name": null, 00:10:37.478 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:37.478 "is_configured": false, 00:10:37.478 "data_offset": 0, 00:10:37.478 "data_size": 63488 00:10:37.478 }, 00:10:37.478 { 00:10:37.478 "name": null, 00:10:37.478 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:37.478 "is_configured": false, 00:10:37.478 "data_offset": 0, 00:10:37.478 "data_size": 63488 00:10:37.478 } 00:10:37.478 ] 00:10:37.478 }' 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.478 11:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 [2024-11-04 11:43:03.346932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.059 "name": "Existed_Raid", 00:10:38.059 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:38.059 "strip_size_kb": 0, 00:10:38.059 "state": "configuring", 00:10:38.059 "raid_level": "raid1", 00:10:38.059 "superblock": true, 00:10:38.059 "num_base_bdevs": 3, 00:10:38.059 "num_base_bdevs_discovered": 2, 00:10:38.059 "num_base_bdevs_operational": 3, 00:10:38.059 "base_bdevs_list": [ 00:10:38.059 { 00:10:38.059 "name": "BaseBdev1", 00:10:38.059 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:38.059 "is_configured": true, 00:10:38.059 "data_offset": 2048, 00:10:38.059 "data_size": 63488 00:10:38.059 }, 00:10:38.059 { 00:10:38.059 "name": null, 00:10:38.059 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:38.059 "is_configured": false, 00:10:38.059 "data_offset": 0, 00:10:38.059 "data_size": 63488 00:10:38.059 }, 00:10:38.059 { 00:10:38.059 "name": "BaseBdev3", 00:10:38.059 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:38.059 "is_configured": true, 00:10:38.059 "data_offset": 2048, 00:10:38.059 "data_size": 63488 00:10:38.059 } 00:10:38.059 ] 00:10:38.059 }' 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.059 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.628 11:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.629 [2024-11-04 11:43:03.906011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.629 "name": "Existed_Raid", 00:10:38.629 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:38.629 "strip_size_kb": 0, 00:10:38.629 "state": "configuring", 00:10:38.629 "raid_level": "raid1", 00:10:38.629 "superblock": true, 00:10:38.629 "num_base_bdevs": 3, 00:10:38.629 "num_base_bdevs_discovered": 1, 00:10:38.629 "num_base_bdevs_operational": 3, 00:10:38.629 "base_bdevs_list": [ 00:10:38.629 { 00:10:38.629 "name": null, 00:10:38.629 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:38.629 "is_configured": false, 00:10:38.629 "data_offset": 0, 00:10:38.629 "data_size": 63488 00:10:38.629 }, 00:10:38.629 { 00:10:38.629 "name": null, 00:10:38.629 "uuid": 
"90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:38.629 "is_configured": false, 00:10:38.629 "data_offset": 0, 00:10:38.629 "data_size": 63488 00:10:38.629 }, 00:10:38.629 { 00:10:38.629 "name": "BaseBdev3", 00:10:38.629 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:38.629 "is_configured": true, 00:10:38.629 "data_offset": 2048, 00:10:38.629 "data_size": 63488 00:10:38.629 } 00:10:38.629 ] 00:10:38.629 }' 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.629 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.198 [2024-11-04 11:43:04.449206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.198 11:43:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.199 "name": "Existed_Raid", 00:10:39.199 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:39.199 "strip_size_kb": 0, 00:10:39.199 "state": "configuring", 00:10:39.199 
"raid_level": "raid1", 00:10:39.199 "superblock": true, 00:10:39.199 "num_base_bdevs": 3, 00:10:39.199 "num_base_bdevs_discovered": 2, 00:10:39.199 "num_base_bdevs_operational": 3, 00:10:39.199 "base_bdevs_list": [ 00:10:39.199 { 00:10:39.199 "name": null, 00:10:39.199 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:39.199 "is_configured": false, 00:10:39.199 "data_offset": 0, 00:10:39.199 "data_size": 63488 00:10:39.199 }, 00:10:39.199 { 00:10:39.199 "name": "BaseBdev2", 00:10:39.199 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:39.199 "is_configured": true, 00:10:39.199 "data_offset": 2048, 00:10:39.199 "data_size": 63488 00:10:39.199 }, 00:10:39.199 { 00:10:39.199 "name": "BaseBdev3", 00:10:39.199 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:39.199 "is_configured": true, 00:10:39.199 "data_offset": 2048, 00:10:39.199 "data_size": 63488 00:10:39.199 } 00:10:39.199 ] 00:10:39.199 }' 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.199 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.459 11:43:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 11:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:39.719 11:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce9ab264-4db5-4636-bf15-0c757989056d 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 [2024-11-04 11:43:05.067976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.719 [2024-11-04 11:43:05.068355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:39.719 [2024-11-04 11:43:05.068374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:39.719 [2024-11-04 11:43:05.068694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:39.719 [2024-11-04 11:43:05.068861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:39.719 [2024-11-04 11:43:05.068875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:39.719 [2024-11-04 11:43:05.069013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.719 NewBaseBdev 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.719 
11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 [ 00:10:39.719 { 00:10:39.719 "name": "NewBaseBdev", 00:10:39.719 "aliases": [ 00:10:39.719 "ce9ab264-4db5-4636-bf15-0c757989056d" 00:10:39.719 ], 00:10:39.719 "product_name": "Malloc disk", 00:10:39.719 "block_size": 512, 00:10:39.719 "num_blocks": 65536, 00:10:39.719 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:39.719 "assigned_rate_limits": { 00:10:39.719 "rw_ios_per_sec": 0, 00:10:39.719 "rw_mbytes_per_sec": 0, 00:10:39.719 "r_mbytes_per_sec": 0, 00:10:39.719 "w_mbytes_per_sec": 0 00:10:39.719 }, 00:10:39.719 "claimed": true, 00:10:39.719 "claim_type": "exclusive_write", 00:10:39.719 
"zoned": false, 00:10:39.719 "supported_io_types": { 00:10:39.719 "read": true, 00:10:39.719 "write": true, 00:10:39.719 "unmap": true, 00:10:39.719 "flush": true, 00:10:39.719 "reset": true, 00:10:39.719 "nvme_admin": false, 00:10:39.719 "nvme_io": false, 00:10:39.719 "nvme_io_md": false, 00:10:39.719 "write_zeroes": true, 00:10:39.719 "zcopy": true, 00:10:39.719 "get_zone_info": false, 00:10:39.719 "zone_management": false, 00:10:39.719 "zone_append": false, 00:10:39.719 "compare": false, 00:10:39.719 "compare_and_write": false, 00:10:39.719 "abort": true, 00:10:39.719 "seek_hole": false, 00:10:39.719 "seek_data": false, 00:10:39.719 "copy": true, 00:10:39.719 "nvme_iov_md": false 00:10:39.719 }, 00:10:39.719 "memory_domains": [ 00:10:39.719 { 00:10:39.719 "dma_device_id": "system", 00:10:39.719 "dma_device_type": 1 00:10:39.719 }, 00:10:39.719 { 00:10:39.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.719 "dma_device_type": 2 00:10:39.719 } 00:10:39.719 ], 00:10:39.719 "driver_specific": {} 00:10:39.719 } 00:10:39.719 ] 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.719 "name": "Existed_Raid", 00:10:39.719 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:39.719 "strip_size_kb": 0, 00:10:39.719 "state": "online", 00:10:39.719 "raid_level": "raid1", 00:10:39.719 "superblock": true, 00:10:39.719 "num_base_bdevs": 3, 00:10:39.719 "num_base_bdevs_discovered": 3, 00:10:39.719 "num_base_bdevs_operational": 3, 00:10:39.719 "base_bdevs_list": [ 00:10:39.719 { 00:10:39.719 "name": "NewBaseBdev", 00:10:39.719 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:39.719 "is_configured": true, 00:10:39.719 "data_offset": 2048, 00:10:39.719 "data_size": 63488 00:10:39.719 }, 00:10:39.719 { 00:10:39.719 "name": "BaseBdev2", 00:10:39.719 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:39.719 "is_configured": true, 00:10:39.719 "data_offset": 2048, 00:10:39.719 "data_size": 63488 00:10:39.719 }, 00:10:39.719 
{ 00:10:39.719 "name": "BaseBdev3", 00:10:39.719 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:39.719 "is_configured": true, 00:10:39.719 "data_offset": 2048, 00:10:39.719 "data_size": 63488 00:10:39.719 } 00:10:39.719 ] 00:10:39.719 }' 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.719 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.289 [2024-11-04 11:43:05.551591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.289 "name": "Existed_Raid", 00:10:40.289 
"aliases": [ 00:10:40.289 "9aa398e1-b9e4-4bb9-a88d-b30b56d05398" 00:10:40.289 ], 00:10:40.289 "product_name": "Raid Volume", 00:10:40.289 "block_size": 512, 00:10:40.289 "num_blocks": 63488, 00:10:40.289 "uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:40.289 "assigned_rate_limits": { 00:10:40.289 "rw_ios_per_sec": 0, 00:10:40.289 "rw_mbytes_per_sec": 0, 00:10:40.289 "r_mbytes_per_sec": 0, 00:10:40.289 "w_mbytes_per_sec": 0 00:10:40.289 }, 00:10:40.289 "claimed": false, 00:10:40.289 "zoned": false, 00:10:40.289 "supported_io_types": { 00:10:40.289 "read": true, 00:10:40.289 "write": true, 00:10:40.289 "unmap": false, 00:10:40.289 "flush": false, 00:10:40.289 "reset": true, 00:10:40.289 "nvme_admin": false, 00:10:40.289 "nvme_io": false, 00:10:40.289 "nvme_io_md": false, 00:10:40.289 "write_zeroes": true, 00:10:40.289 "zcopy": false, 00:10:40.289 "get_zone_info": false, 00:10:40.289 "zone_management": false, 00:10:40.289 "zone_append": false, 00:10:40.289 "compare": false, 00:10:40.289 "compare_and_write": false, 00:10:40.289 "abort": false, 00:10:40.289 "seek_hole": false, 00:10:40.289 "seek_data": false, 00:10:40.289 "copy": false, 00:10:40.289 "nvme_iov_md": false 00:10:40.289 }, 00:10:40.289 "memory_domains": [ 00:10:40.289 { 00:10:40.289 "dma_device_id": "system", 00:10:40.289 "dma_device_type": 1 00:10:40.289 }, 00:10:40.289 { 00:10:40.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.289 "dma_device_type": 2 00:10:40.289 }, 00:10:40.289 { 00:10:40.289 "dma_device_id": "system", 00:10:40.289 "dma_device_type": 1 00:10:40.289 }, 00:10:40.289 { 00:10:40.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.289 "dma_device_type": 2 00:10:40.289 }, 00:10:40.289 { 00:10:40.289 "dma_device_id": "system", 00:10:40.289 "dma_device_type": 1 00:10:40.289 }, 00:10:40.289 { 00:10:40.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.289 "dma_device_type": 2 00:10:40.289 } 00:10:40.289 ], 00:10:40.289 "driver_specific": { 00:10:40.289 "raid": { 00:10:40.289 
"uuid": "9aa398e1-b9e4-4bb9-a88d-b30b56d05398", 00:10:40.289 "strip_size_kb": 0, 00:10:40.289 "state": "online", 00:10:40.289 "raid_level": "raid1", 00:10:40.289 "superblock": true, 00:10:40.289 "num_base_bdevs": 3, 00:10:40.289 "num_base_bdevs_discovered": 3, 00:10:40.289 "num_base_bdevs_operational": 3, 00:10:40.289 "base_bdevs_list": [ 00:10:40.289 { 00:10:40.289 "name": "NewBaseBdev", 00:10:40.289 "uuid": "ce9ab264-4db5-4636-bf15-0c757989056d", 00:10:40.289 "is_configured": true, 00:10:40.289 "data_offset": 2048, 00:10:40.289 "data_size": 63488 00:10:40.289 }, 00:10:40.289 { 00:10:40.289 "name": "BaseBdev2", 00:10:40.289 "uuid": "90e16a1d-c276-44e8-a22f-5bad6e721cda", 00:10:40.289 "is_configured": true, 00:10:40.289 "data_offset": 2048, 00:10:40.289 "data_size": 63488 00:10:40.289 }, 00:10:40.289 { 00:10:40.289 "name": "BaseBdev3", 00:10:40.289 "uuid": "3c096d3b-dae6-41ac-a535-070b9b639675", 00:10:40.289 "is_configured": true, 00:10:40.289 "data_offset": 2048, 00:10:40.289 "data_size": 63488 00:10:40.289 } 00:10:40.289 ] 00:10:40.289 } 00:10:40.289 } 00:10:40.289 }' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:40.289 BaseBdev2 00:10:40.289 BaseBdev3' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:40.289 11:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.289 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.290 11:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.290 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.290 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.550 [2024-11-04 11:43:05.850739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.550 [2024-11-04 11:43:05.850843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.550 [2024-11-04 11:43:05.850974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.550 [2024-11-04 11:43:05.851356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.550 [2024-11-04 11:43:05.851446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68247 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 68247 ']' 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68247 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68247 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68247' 00:10:40.550 killing process with pid 68247 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68247 00:10:40.550 [2024-11-04 11:43:05.896228] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.550 11:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68247 00:10:40.809 [2024-11-04 11:43:06.228464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.189 11:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.189 00:10:42.189 real 0m10.745s 00:10:42.189 user 0m17.065s 00:10:42.189 sys 0m1.847s 00:10:42.189 ************************************ 00:10:42.189 END TEST raid_state_function_test_sb 00:10:42.189 ************************************ 00:10:42.189 11:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.189 11:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.189 11:43:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:42.189 11:43:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:42.189 11:43:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.189 11:43:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.189 ************************************ 00:10:42.189 START TEST raid_superblock_test 00:10:42.189 ************************************ 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68873 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68873 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68873 ']' 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.189 11:43:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.189 [2024-11-04 11:43:07.576462] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:10:42.189 [2024-11-04 11:43:07.576663] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68873 ] 00:10:42.448 [2024-11-04 11:43:07.752432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.448 [2024-11-04 11:43:07.872455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.708 [2024-11-04 11:43:08.090040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.708 [2024-11-04 11:43:08.090080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:42.968 
11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.968 malloc1 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.968 [2024-11-04 11:43:08.481424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.968 [2024-11-04 11:43:08.481600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.968 [2024-11-04 11:43:08.481666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:42.968 [2024-11-04 11:43:08.481712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.968 [2024-11-04 11:43:08.484285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.968 [2024-11-04 11:43:08.484362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.968 pt1 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:42.968 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.228 malloc2 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.228 [2024-11-04 11:43:08.546180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.228 [2024-11-04 11:43:08.546294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.228 [2024-11-04 11:43:08.546324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:43.228 [2024-11-04 11:43:08.546333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.228 [2024-11-04 11:43:08.548782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.228 [2024-11-04 11:43:08.548818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.228 
pt2 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.228 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.228 malloc3 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.229 [2024-11-04 11:43:08.616610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:43.229 [2024-11-04 11:43:08.616744] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.229 [2024-11-04 11:43:08.616791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:43.229 [2024-11-04 11:43:08.616841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.229 [2024-11-04 11:43:08.619533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.229 [2024-11-04 11:43:08.619603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:43.229 pt3 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.229 [2024-11-04 11:43:08.628649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.229 [2024-11-04 11:43:08.630770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.229 [2024-11-04 11:43:08.630876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:43.229 [2024-11-04 11:43:08.631064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:43.229 [2024-11-04 11:43:08.631159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.229 [2024-11-04 11:43:08.631447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:43.229 
[2024-11-04 11:43:08.631673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:43.229 [2024-11-04 11:43:08.631719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:43.229 [2024-11-04 11:43:08.631951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.229 "name": "raid_bdev1", 00:10:43.229 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:43.229 "strip_size_kb": 0, 00:10:43.229 "state": "online", 00:10:43.229 "raid_level": "raid1", 00:10:43.229 "superblock": true, 00:10:43.229 "num_base_bdevs": 3, 00:10:43.229 "num_base_bdevs_discovered": 3, 00:10:43.229 "num_base_bdevs_operational": 3, 00:10:43.229 "base_bdevs_list": [ 00:10:43.229 { 00:10:43.229 "name": "pt1", 00:10:43.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.229 "is_configured": true, 00:10:43.229 "data_offset": 2048, 00:10:43.229 "data_size": 63488 00:10:43.229 }, 00:10:43.229 { 00:10:43.229 "name": "pt2", 00:10:43.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.229 "is_configured": true, 00:10:43.229 "data_offset": 2048, 00:10:43.229 "data_size": 63488 00:10:43.229 }, 00:10:43.229 { 00:10:43.229 "name": "pt3", 00:10:43.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.229 "is_configured": true, 00:10:43.229 "data_offset": 2048, 00:10:43.229 "data_size": 63488 00:10:43.229 } 00:10:43.229 ] 00:10:43.229 }' 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.229 11:43:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.797 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.797 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:43.797 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.797 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.798 11:43:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.798 [2024-11-04 11:43:09.108387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.798 "name": "raid_bdev1", 00:10:43.798 "aliases": [ 00:10:43.798 "e95bd943-cef1-4799-b245-d0f851d183d5" 00:10:43.798 ], 00:10:43.798 "product_name": "Raid Volume", 00:10:43.798 "block_size": 512, 00:10:43.798 "num_blocks": 63488, 00:10:43.798 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:43.798 "assigned_rate_limits": { 00:10:43.798 "rw_ios_per_sec": 0, 00:10:43.798 "rw_mbytes_per_sec": 0, 00:10:43.798 "r_mbytes_per_sec": 0, 00:10:43.798 "w_mbytes_per_sec": 0 00:10:43.798 }, 00:10:43.798 "claimed": false, 00:10:43.798 "zoned": false, 00:10:43.798 "supported_io_types": { 00:10:43.798 "read": true, 00:10:43.798 "write": true, 00:10:43.798 "unmap": false, 00:10:43.798 "flush": false, 00:10:43.798 "reset": true, 00:10:43.798 "nvme_admin": false, 00:10:43.798 "nvme_io": false, 00:10:43.798 "nvme_io_md": false, 00:10:43.798 "write_zeroes": true, 00:10:43.798 "zcopy": false, 00:10:43.798 "get_zone_info": false, 00:10:43.798 "zone_management": false, 00:10:43.798 "zone_append": false, 00:10:43.798 "compare": false, 00:10:43.798 
"compare_and_write": false, 00:10:43.798 "abort": false, 00:10:43.798 "seek_hole": false, 00:10:43.798 "seek_data": false, 00:10:43.798 "copy": false, 00:10:43.798 "nvme_iov_md": false 00:10:43.798 }, 00:10:43.798 "memory_domains": [ 00:10:43.798 { 00:10:43.798 "dma_device_id": "system", 00:10:43.798 "dma_device_type": 1 00:10:43.798 }, 00:10:43.798 { 00:10:43.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.798 "dma_device_type": 2 00:10:43.798 }, 00:10:43.798 { 00:10:43.798 "dma_device_id": "system", 00:10:43.798 "dma_device_type": 1 00:10:43.798 }, 00:10:43.798 { 00:10:43.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.798 "dma_device_type": 2 00:10:43.798 }, 00:10:43.798 { 00:10:43.798 "dma_device_id": "system", 00:10:43.798 "dma_device_type": 1 00:10:43.798 }, 00:10:43.798 { 00:10:43.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.798 "dma_device_type": 2 00:10:43.798 } 00:10:43.798 ], 00:10:43.798 "driver_specific": { 00:10:43.798 "raid": { 00:10:43.798 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:43.798 "strip_size_kb": 0, 00:10:43.798 "state": "online", 00:10:43.798 "raid_level": "raid1", 00:10:43.798 "superblock": true, 00:10:43.798 "num_base_bdevs": 3, 00:10:43.798 "num_base_bdevs_discovered": 3, 00:10:43.798 "num_base_bdevs_operational": 3, 00:10:43.798 "base_bdevs_list": [ 00:10:43.798 { 00:10:43.798 "name": "pt1", 00:10:43.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.798 "is_configured": true, 00:10:43.798 "data_offset": 2048, 00:10:43.798 "data_size": 63488 00:10:43.798 }, 00:10:43.798 { 00:10:43.798 "name": "pt2", 00:10:43.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.798 "is_configured": true, 00:10:43.798 "data_offset": 2048, 00:10:43.798 "data_size": 63488 00:10:43.798 }, 00:10:43.798 { 00:10:43.798 "name": "pt3", 00:10:43.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.798 "is_configured": true, 00:10:43.798 "data_offset": 2048, 00:10:43.798 "data_size": 63488 00:10:43.798 } 
00:10:43.798 ] 00:10:43.798 } 00:10:43.798 } 00:10:43.798 }' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:43.798 pt2 00:10:43.798 pt3' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.798 11:43:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.798 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.057 [2024-11-04 11:43:09.391794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e95bd943-cef1-4799-b245-d0f851d183d5 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e95bd943-cef1-4799-b245-d0f851d183d5 ']' 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.057 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.057 [2024-11-04 11:43:09.435425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.057 [2024-11-04 11:43:09.435467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.058 [2024-11-04 11:43:09.435568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.058 [2024-11-04 11:43:09.435652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.058 [2024-11-04 11:43:09.435665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:44.058 
11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.058 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.058 [2024-11-04 11:43:09.571208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:44.058 [2024-11-04 11:43:09.573529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:44.058 [2024-11-04 11:43:09.573586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:44.058 [2024-11-04 11:43:09.573639] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:44.058 [2024-11-04 11:43:09.573693] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:44.058 [2024-11-04 11:43:09.573712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:44.058 [2024-11-04 11:43:09.573730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.058 [2024-11-04 11:43:09.573741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:44.322 request: 00:10:44.322 { 00:10:44.322 "name": "raid_bdev1", 00:10:44.322 "raid_level": "raid1", 00:10:44.322 "base_bdevs": [ 00:10:44.322 "malloc1", 00:10:44.322 "malloc2", 00:10:44.322 "malloc3" 00:10:44.322 ], 00:10:44.322 "superblock": false, 00:10:44.322 "method": "bdev_raid_create", 00:10:44.322 "req_id": 1 00:10:44.322 } 00:10:44.322 Got JSON-RPC error response 00:10:44.322 response: 00:10:44.322 { 00:10:44.322 "code": -17, 00:10:44.322 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:44.322 } 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.322 11:43:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.322 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.322 [2024-11-04 11:43:09.639021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.322 [2024-11-04 11:43:09.639182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.322 [2024-11-04 11:43:09.639228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:44.323 [2024-11-04 11:43:09.639258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.323 [2024-11-04 11:43:09.641872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.323 [2024-11-04 11:43:09.641946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.323 [2024-11-04 11:43:09.642055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:44.323 [2024-11-04 11:43:09.642136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.323 pt1 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.323 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.323 "name": "raid_bdev1", 00:10:44.323 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:44.323 "strip_size_kb": 0, 00:10:44.324 "state": "configuring", 00:10:44.324 
"raid_level": "raid1", 00:10:44.324 "superblock": true, 00:10:44.324 "num_base_bdevs": 3, 00:10:44.324 "num_base_bdevs_discovered": 1, 00:10:44.324 "num_base_bdevs_operational": 3, 00:10:44.324 "base_bdevs_list": [ 00:10:44.324 { 00:10:44.324 "name": "pt1", 00:10:44.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.324 "is_configured": true, 00:10:44.324 "data_offset": 2048, 00:10:44.324 "data_size": 63488 00:10:44.324 }, 00:10:44.324 { 00:10:44.324 "name": null, 00:10:44.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.324 "is_configured": false, 00:10:44.324 "data_offset": 2048, 00:10:44.324 "data_size": 63488 00:10:44.324 }, 00:10:44.324 { 00:10:44.324 "name": null, 00:10:44.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.324 "is_configured": false, 00:10:44.324 "data_offset": 2048, 00:10:44.324 "data_size": 63488 00:10:44.324 } 00:10:44.324 ] 00:10:44.324 }' 00:10:44.324 11:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.324 11:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 [2024-11-04 11:43:10.014468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.585 [2024-11-04 11:43:10.014562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.585 [2024-11-04 11:43:10.014591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:44.585 [2024-11-04 11:43:10.014602] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.585 [2024-11-04 11:43:10.015151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.585 [2024-11-04 11:43:10.015185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.585 [2024-11-04 11:43:10.015295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.585 [2024-11-04 11:43:10.015322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.585 pt2 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 [2024-11-04 11:43:10.026415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.585 "name": "raid_bdev1", 00:10:44.585 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:44.585 "strip_size_kb": 0, 00:10:44.585 "state": "configuring", 00:10:44.585 "raid_level": "raid1", 00:10:44.585 "superblock": true, 00:10:44.585 "num_base_bdevs": 3, 00:10:44.585 "num_base_bdevs_discovered": 1, 00:10:44.585 "num_base_bdevs_operational": 3, 00:10:44.585 "base_bdevs_list": [ 00:10:44.585 { 00:10:44.585 "name": "pt1", 00:10:44.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.585 "is_configured": true, 00:10:44.585 "data_offset": 2048, 00:10:44.585 "data_size": 63488 00:10:44.585 }, 00:10:44.585 { 00:10:44.585 "name": null, 00:10:44.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.585 "is_configured": false, 00:10:44.585 "data_offset": 0, 00:10:44.585 "data_size": 63488 00:10:44.585 }, 00:10:44.585 { 00:10:44.585 "name": null, 00:10:44.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.585 "is_configured": false, 00:10:44.585 "data_offset": 2048, 00:10:44.585 
"data_size": 63488 00:10:44.585 } 00:10:44.585 ] 00:10:44.585 }' 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.585 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.153 [2024-11-04 11:43:10.525557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.153 [2024-11-04 11:43:10.525759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.153 [2024-11-04 11:43:10.525809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:45.153 [2024-11-04 11:43:10.525843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.153 [2024-11-04 11:43:10.526457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.153 [2024-11-04 11:43:10.526527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.153 [2024-11-04 11:43:10.526684] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.153 [2024-11-04 11:43:10.526776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.153 pt2 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.153 [2024-11-04 11:43:10.537474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:45.153 [2024-11-04 11:43:10.537563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.153 [2024-11-04 11:43:10.537604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:45.153 [2024-11-04 11:43:10.537641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.153 [2024-11-04 11:43:10.538101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.153 [2024-11-04 11:43:10.538166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:45.153 [2024-11-04 11:43:10.538294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:45.153 [2024-11-04 11:43:10.538349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:45.153 [2024-11-04 11:43:10.538550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.153 [2024-11-04 11:43:10.538597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.153 [2024-11-04 11:43:10.538921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:45.153 [2024-11-04 11:43:10.539169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:45.153 [2024-11-04 11:43:10.539210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:45.153 [2024-11-04 11:43:10.539467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.153 pt3 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.153 "name": "raid_bdev1", 00:10:45.153 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:45.153 "strip_size_kb": 0, 00:10:45.153 "state": "online", 00:10:45.153 "raid_level": "raid1", 00:10:45.153 "superblock": true, 00:10:45.153 "num_base_bdevs": 3, 00:10:45.153 "num_base_bdevs_discovered": 3, 00:10:45.153 "num_base_bdevs_operational": 3, 00:10:45.153 "base_bdevs_list": [ 00:10:45.153 { 00:10:45.153 "name": "pt1", 00:10:45.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.153 "is_configured": true, 00:10:45.153 "data_offset": 2048, 00:10:45.153 "data_size": 63488 00:10:45.153 }, 00:10:45.153 { 00:10:45.153 "name": "pt2", 00:10:45.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.153 "is_configured": true, 00:10:45.153 "data_offset": 2048, 00:10:45.153 "data_size": 63488 00:10:45.153 }, 00:10:45.153 { 00:10:45.153 "name": "pt3", 00:10:45.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.153 "is_configured": true, 00:10:45.153 "data_offset": 2048, 00:10:45.153 "data_size": 63488 00:10:45.153 } 00:10:45.153 ] 00:10:45.153 }' 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.153 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.723 11:43:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 [2024-11-04 11:43:10.977116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.723 11:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.723 "name": "raid_bdev1", 00:10:45.723 "aliases": [ 00:10:45.723 "e95bd943-cef1-4799-b245-d0f851d183d5" 00:10:45.723 ], 00:10:45.723 "product_name": "Raid Volume", 00:10:45.723 "block_size": 512, 00:10:45.723 "num_blocks": 63488, 00:10:45.723 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:45.723 "assigned_rate_limits": { 00:10:45.723 "rw_ios_per_sec": 0, 00:10:45.723 "rw_mbytes_per_sec": 0, 00:10:45.723 "r_mbytes_per_sec": 0, 00:10:45.723 "w_mbytes_per_sec": 0 00:10:45.723 }, 00:10:45.723 "claimed": false, 00:10:45.723 "zoned": false, 00:10:45.723 "supported_io_types": { 00:10:45.723 "read": true, 00:10:45.723 "write": true, 00:10:45.723 "unmap": false, 00:10:45.723 "flush": false, 00:10:45.723 "reset": true, 00:10:45.723 "nvme_admin": false, 00:10:45.723 "nvme_io": false, 00:10:45.723 "nvme_io_md": false, 00:10:45.723 "write_zeroes": true, 00:10:45.723 "zcopy": false, 00:10:45.723 "get_zone_info": false, 00:10:45.723 
"zone_management": false, 00:10:45.723 "zone_append": false, 00:10:45.723 "compare": false, 00:10:45.723 "compare_and_write": false, 00:10:45.723 "abort": false, 00:10:45.723 "seek_hole": false, 00:10:45.723 "seek_data": false, 00:10:45.723 "copy": false, 00:10:45.723 "nvme_iov_md": false 00:10:45.723 }, 00:10:45.723 "memory_domains": [ 00:10:45.723 { 00:10:45.723 "dma_device_id": "system", 00:10:45.723 "dma_device_type": 1 00:10:45.723 }, 00:10:45.723 { 00:10:45.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.723 "dma_device_type": 2 00:10:45.723 }, 00:10:45.723 { 00:10:45.723 "dma_device_id": "system", 00:10:45.723 "dma_device_type": 1 00:10:45.723 }, 00:10:45.723 { 00:10:45.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.723 "dma_device_type": 2 00:10:45.723 }, 00:10:45.723 { 00:10:45.723 "dma_device_id": "system", 00:10:45.723 "dma_device_type": 1 00:10:45.723 }, 00:10:45.723 { 00:10:45.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.723 "dma_device_type": 2 00:10:45.723 } 00:10:45.723 ], 00:10:45.723 "driver_specific": { 00:10:45.723 "raid": { 00:10:45.723 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:45.723 "strip_size_kb": 0, 00:10:45.723 "state": "online", 00:10:45.723 "raid_level": "raid1", 00:10:45.723 "superblock": true, 00:10:45.723 "num_base_bdevs": 3, 00:10:45.723 "num_base_bdevs_discovered": 3, 00:10:45.723 "num_base_bdevs_operational": 3, 00:10:45.723 "base_bdevs_list": [ 00:10:45.723 { 00:10:45.723 "name": "pt1", 00:10:45.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.723 "is_configured": true, 00:10:45.723 "data_offset": 2048, 00:10:45.723 "data_size": 63488 00:10:45.723 }, 00:10:45.723 { 00:10:45.723 "name": "pt2", 00:10:45.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.723 "is_configured": true, 00:10:45.723 "data_offset": 2048, 00:10:45.723 "data_size": 63488 00:10:45.723 }, 00:10:45.723 { 00:10:45.723 "name": "pt3", 00:10:45.723 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:45.723 "is_configured": true, 00:10:45.723 "data_offset": 2048, 00:10:45.723 "data_size": 63488 00:10:45.723 } 00:10:45.723 ] 00:10:45.723 } 00:10:45.723 } 00:10:45.723 }' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:45.723 pt2 00:10:45.723 pt3' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 [2024-11-04 11:43:11.236783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e95bd943-cef1-4799-b245-d0f851d183d5 '!=' e95bd943-cef1-4799-b245-d0f851d183d5 ']' 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.983 [2024-11-04 11:43:11.280364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.983 "name": "raid_bdev1", 00:10:45.983 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:45.983 "strip_size_kb": 0, 00:10:45.983 "state": "online", 00:10:45.983 "raid_level": "raid1", 00:10:45.983 "superblock": true, 00:10:45.983 "num_base_bdevs": 3, 00:10:45.983 "num_base_bdevs_discovered": 2, 00:10:45.983 "num_base_bdevs_operational": 2, 00:10:45.983 "base_bdevs_list": [ 00:10:45.983 { 00:10:45.983 "name": null, 00:10:45.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.983 "is_configured": false, 00:10:45.983 "data_offset": 0, 00:10:45.983 "data_size": 63488 00:10:45.983 }, 00:10:45.983 { 00:10:45.983 "name": "pt2", 00:10:45.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.983 "is_configured": true, 00:10:45.983 "data_offset": 2048, 00:10:45.983 "data_size": 63488 00:10:45.983 }, 00:10:45.983 { 00:10:45.983 "name": "pt3", 00:10:45.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.983 "is_configured": true, 00:10:45.983 "data_offset": 2048, 00:10:45.983 "data_size": 63488 00:10:45.983 } 00:10:45.983 ] 00:10:45.983 }' 00:10:45.983 11:43:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.983 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.243 [2024-11-04 11:43:11.695611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.243 [2024-11-04 11:43:11.695744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.243 [2024-11-04 11:43:11.695893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.243 [2024-11-04 11:43:11.695995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.243 [2024-11-04 11:43:11.696051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:46.243 
11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.243 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.503 [2024-11-04 11:43:11.775457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.503 [2024-11-04 11:43:11.775546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.503 [2024-11-04 11:43:11.775568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:46.503 [2024-11-04 11:43:11.775579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.503 [2024-11-04 11:43:11.778225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.503 [2024-11-04 11:43:11.778273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.503 [2024-11-04 11:43:11.778379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.503 [2024-11-04 11:43:11.778444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.503 pt2 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.503 "name": "raid_bdev1", 00:10:46.503 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:46.503 "strip_size_kb": 0, 00:10:46.503 "state": "configuring", 00:10:46.503 "raid_level": "raid1", 00:10:46.503 "superblock": true, 00:10:46.503 "num_base_bdevs": 3, 00:10:46.503 "num_base_bdevs_discovered": 1, 00:10:46.503 "num_base_bdevs_operational": 2, 00:10:46.503 "base_bdevs_list": [ 00:10:46.503 { 00:10:46.503 "name": null, 00:10:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.503 "is_configured": false, 00:10:46.503 "data_offset": 2048, 00:10:46.503 "data_size": 63488 00:10:46.503 }, 00:10:46.503 { 00:10:46.503 "name": "pt2", 00:10:46.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.503 "is_configured": true, 00:10:46.503 "data_offset": 2048, 00:10:46.503 "data_size": 63488 00:10:46.503 }, 00:10:46.503 { 00:10:46.503 "name": null, 00:10:46.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.503 "is_configured": false, 00:10:46.503 "data_offset": 2048, 00:10:46.503 "data_size": 63488 00:10:46.503 } 00:10:46.503 ] 00:10:46.503 }' 
00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.503 11:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 [2024-11-04 11:43:12.206749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.763 [2024-11-04 11:43:12.206923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.763 [2024-11-04 11:43:12.207001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:46.763 [2024-11-04 11:43:12.207060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.763 [2024-11-04 11:43:12.207673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.763 [2024-11-04 11:43:12.207759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.763 [2024-11-04 11:43:12.207927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:46.763 [2024-11-04 11:43:12.208008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.763 [2024-11-04 11:43:12.208221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:46.763 [2024-11-04 11:43:12.208275] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.763 [2024-11-04 11:43:12.208629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:46.763 [2024-11-04 11:43:12.208871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:46.763 [2024-11-04 11:43:12.208921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:46.763 [2024-11-04 11:43:12.209182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.763 pt3 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.763 "name": "raid_bdev1", 00:10:46.763 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:46.763 "strip_size_kb": 0, 00:10:46.763 "state": "online", 00:10:46.763 "raid_level": "raid1", 00:10:46.763 "superblock": true, 00:10:46.763 "num_base_bdevs": 3, 00:10:46.763 "num_base_bdevs_discovered": 2, 00:10:46.763 "num_base_bdevs_operational": 2, 00:10:46.763 "base_bdevs_list": [ 00:10:46.763 { 00:10:46.763 "name": null, 00:10:46.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.763 "is_configured": false, 00:10:46.763 "data_offset": 2048, 00:10:46.763 "data_size": 63488 00:10:46.763 }, 00:10:46.763 { 00:10:46.763 "name": "pt2", 00:10:46.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.763 "is_configured": true, 00:10:46.763 "data_offset": 2048, 00:10:46.763 "data_size": 63488 00:10:46.763 }, 00:10:46.763 { 00:10:46.763 "name": "pt3", 00:10:46.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.763 "is_configured": true, 00:10:46.763 "data_offset": 2048, 00:10:46.763 "data_size": 63488 00:10:46.763 } 00:10:46.763 ] 00:10:46.763 }' 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.763 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.333 
11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.333 [2024-11-04 11:43:12.709870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.333 [2024-11-04 11:43:12.709978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.333 [2024-11-04 11:43:12.710090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.333 [2024-11-04 11:43:12.710158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.333 [2024-11-04 11:43:12.710169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.333 11:43:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.333 [2024-11-04 11:43:12.781811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.333 [2024-11-04 11:43:12.781900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.333 [2024-11-04 11:43:12.781927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:47.333 [2024-11-04 11:43:12.781937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.333 [2024-11-04 11:43:12.784531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.333 [2024-11-04 11:43:12.784595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.333 [2024-11-04 11:43:12.784707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:47.333 [2024-11-04 11:43:12.784769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.333 [2024-11-04 11:43:12.784911] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:47.333 [2024-11-04 11:43:12.784923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.333 [2024-11-04 11:43:12.784942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:47.333 [2024-11-04 
11:43:12.785020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.333 pt1 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.333 11:43:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.333 "name": "raid_bdev1", 00:10:47.333 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:47.333 "strip_size_kb": 0, 00:10:47.333 "state": "configuring", 00:10:47.333 "raid_level": "raid1", 00:10:47.333 "superblock": true, 00:10:47.333 "num_base_bdevs": 3, 00:10:47.333 "num_base_bdevs_discovered": 1, 00:10:47.333 "num_base_bdevs_operational": 2, 00:10:47.333 "base_bdevs_list": [ 00:10:47.333 { 00:10:47.333 "name": null, 00:10:47.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.333 "is_configured": false, 00:10:47.333 "data_offset": 2048, 00:10:47.333 "data_size": 63488 00:10:47.333 }, 00:10:47.333 { 00:10:47.333 "name": "pt2", 00:10:47.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.333 "is_configured": true, 00:10:47.333 "data_offset": 2048, 00:10:47.333 "data_size": 63488 00:10:47.333 }, 00:10:47.333 { 00:10:47.333 "name": null, 00:10:47.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.333 "is_configured": false, 00:10:47.333 "data_offset": 2048, 00:10:47.333 "data_size": 63488 00:10:47.333 } 00:10:47.333 ] 00:10:47.333 }' 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.333 11:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.902 [2024-11-04 11:43:13.245015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.902 [2024-11-04 11:43:13.245164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.902 [2024-11-04 11:43:13.245209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:47.902 [2024-11-04 11:43:13.245242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.902 [2024-11-04 11:43:13.245832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.902 [2024-11-04 11:43:13.245898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.902 [2024-11-04 11:43:13.246042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:47.902 [2024-11-04 11:43:13.246131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.902 [2024-11-04 11:43:13.246307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:47.902 [2024-11-04 11:43:13.246346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.902 [2024-11-04 11:43:13.246690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:47.902 [2024-11-04 11:43:13.246932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:47.902 [2024-11-04 11:43:13.246980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:47.902 [2024-11-04 11:43:13.247193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.902 pt3 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.902 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.903 "name": "raid_bdev1", 00:10:47.903 "uuid": "e95bd943-cef1-4799-b245-d0f851d183d5", 00:10:47.903 "strip_size_kb": 0, 00:10:47.903 "state": "online", 00:10:47.903 "raid_level": "raid1", 00:10:47.903 "superblock": true, 00:10:47.903 "num_base_bdevs": 3, 00:10:47.903 "num_base_bdevs_discovered": 2, 00:10:47.903 "num_base_bdevs_operational": 2, 00:10:47.903 "base_bdevs_list": [ 00:10:47.903 { 00:10:47.903 "name": null, 00:10:47.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.903 "is_configured": false, 00:10:47.903 "data_offset": 2048, 00:10:47.903 "data_size": 63488 00:10:47.903 }, 00:10:47.903 { 00:10:47.903 "name": "pt2", 00:10:47.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.903 "is_configured": true, 00:10:47.903 "data_offset": 2048, 00:10:47.903 "data_size": 63488 00:10:47.903 }, 00:10:47.903 { 00:10:47.903 "name": "pt3", 00:10:47.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.903 "is_configured": true, 00:10:47.903 "data_offset": 2048, 00:10:47.903 "data_size": 63488 00:10:47.903 } 00:10:47.903 ] 00:10:47.903 }' 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.903 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:48.471 
11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.471 [2024-11-04 11:43:13.772479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e95bd943-cef1-4799-b245-d0f851d183d5 '!=' e95bd943-cef1-4799-b245-d0f851d183d5 ']' 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68873 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68873 ']' 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68873 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68873 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68873' 00:10:48.471 killing process with pid 68873 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68873 00:10:48.471 [2024-11-04 
11:43:13.855193] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.471 [2024-11-04 11:43:13.855348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.471 11:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68873 00:10:48.471 [2024-11-04 11:43:13.855457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.471 [2024-11-04 11:43:13.855473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:48.731 [2024-11-04 11:43:14.159667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.110 11:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:50.110 00:10:50.110 real 0m7.837s 00:10:50.110 user 0m12.239s 00:10:50.110 sys 0m1.418s 00:10:50.110 11:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.110 11:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 ************************************ 00:10:50.110 END TEST raid_superblock_test 00:10:50.110 ************************************ 00:10:50.110 11:43:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:50.110 11:43:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:50.110 11:43:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.110 11:43:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 ************************************ 00:10:50.110 START TEST raid_read_error_test 00:10:50.110 ************************************ 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.110 11:43:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hpRwnjjYDt 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69319 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69319 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69319 ']' 00:10:50.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:50.110 11:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 [2024-11-04 11:43:15.493917] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:10:50.110 [2024-11-04 11:43:15.494121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69319 ] 00:10:50.370 [2024-11-04 11:43:15.669961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.370 [2024-11-04 11:43:15.790587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.629 [2024-11-04 11:43:16.004537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.629 [2024-11-04 11:43:16.004690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.888 BaseBdev1_malloc 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.888 true 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.888 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.888 [2024-11-04 11:43:16.408640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:50.888 [2024-11-04 11:43:16.408701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.888 [2024-11-04 11:43:16.408727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:50.888 [2024-11-04 11:43:16.408739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.148 [2024-11-04 11:43:16.411132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.148 [2024-11-04 11:43:16.411176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.148 BaseBdev1 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 BaseBdev2_malloc 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 true 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 [2024-11-04 11:43:16.475851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.148 [2024-11-04 11:43:16.475958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.148 [2024-11-04 11:43:16.475993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.148 [2024-11-04 11:43:16.476024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.148 [2024-11-04 11:43:16.478270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.148 [2024-11-04 11:43:16.478348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.148 BaseBdev2 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 BaseBdev3_malloc 00:10:51.148 11:43:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 true 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 [2024-11-04 11:43:16.553744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:51.148 [2024-11-04 11:43:16.553852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.148 [2024-11-04 11:43:16.553892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:51.148 [2024-11-04 11:43:16.553924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.148 [2024-11-04 11:43:16.556239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.148 [2024-11-04 11:43:16.556332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:51.148 BaseBdev3 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 [2024-11-04 11:43:16.565811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.148 [2024-11-04 11:43:16.567823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.148 [2024-11-04 11:43:16.567964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.148 [2024-11-04 11:43:16.568257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:51.148 [2024-11-04 11:43:16.568310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.148 [2024-11-04 11:43:16.568662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:51.148 [2024-11-04 11:43:16.568924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:51.148 [2024-11-04 11:43:16.568978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:51.148 [2024-11-04 11:43:16.569252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.148 11:43:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.148 "name": "raid_bdev1", 00:10:51.148 "uuid": "9a22480e-5bb2-4287-a274-88bc7151e013", 00:10:51.148 "strip_size_kb": 0, 00:10:51.148 "state": "online", 00:10:51.148 "raid_level": "raid1", 00:10:51.148 "superblock": true, 00:10:51.148 "num_base_bdevs": 3, 00:10:51.148 "num_base_bdevs_discovered": 3, 00:10:51.148 "num_base_bdevs_operational": 3, 00:10:51.148 "base_bdevs_list": [ 00:10:51.148 { 00:10:51.148 "name": "BaseBdev1", 00:10:51.148 "uuid": "8f834b12-aff8-52b1-b0c3-4ed6af8ae61a", 00:10:51.148 "is_configured": true, 00:10:51.148 "data_offset": 2048, 00:10:51.148 "data_size": 63488 00:10:51.148 }, 00:10:51.148 { 00:10:51.148 "name": "BaseBdev2", 00:10:51.148 "uuid": "a78382c8-f492-5942-ba74-af5c3cb098cb", 00:10:51.148 "is_configured": true, 00:10:51.148 "data_offset": 2048, 00:10:51.148 "data_size": 63488 
00:10:51.148 }, 00:10:51.148 { 00:10:51.148 "name": "BaseBdev3", 00:10:51.148 "uuid": "0d2798e0-39cc-5a52-98f5-3be7a2c75eaf", 00:10:51.148 "is_configured": true, 00:10:51.148 "data_offset": 2048, 00:10:51.148 "data_size": 63488 00:10:51.148 } 00:10:51.148 ] 00:10:51.148 }' 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.148 11:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.716 11:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:51.716 11:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:51.717 [2024-11-04 11:43:17.110181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.655 
11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.655 "name": "raid_bdev1", 00:10:52.655 "uuid": "9a22480e-5bb2-4287-a274-88bc7151e013", 00:10:52.655 "strip_size_kb": 0, 00:10:52.655 "state": "online", 00:10:52.655 "raid_level": "raid1", 00:10:52.655 "superblock": true, 00:10:52.655 "num_base_bdevs": 3, 00:10:52.655 "num_base_bdevs_discovered": 3, 00:10:52.655 "num_base_bdevs_operational": 3, 00:10:52.655 "base_bdevs_list": [ 00:10:52.655 { 00:10:52.655 "name": "BaseBdev1", 00:10:52.655 "uuid": "8f834b12-aff8-52b1-b0c3-4ed6af8ae61a", 
00:10:52.655 "is_configured": true, 00:10:52.655 "data_offset": 2048, 00:10:52.655 "data_size": 63488 00:10:52.655 }, 00:10:52.655 { 00:10:52.655 "name": "BaseBdev2", 00:10:52.655 "uuid": "a78382c8-f492-5942-ba74-af5c3cb098cb", 00:10:52.655 "is_configured": true, 00:10:52.655 "data_offset": 2048, 00:10:52.655 "data_size": 63488 00:10:52.655 }, 00:10:52.655 { 00:10:52.655 "name": "BaseBdev3", 00:10:52.655 "uuid": "0d2798e0-39cc-5a52-98f5-3be7a2c75eaf", 00:10:52.655 "is_configured": true, 00:10:52.655 "data_offset": 2048, 00:10:52.655 "data_size": 63488 00:10:52.655 } 00:10:52.655 ] 00:10:52.655 }' 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.655 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.224 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.224 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.224 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.224 [2024-11-04 11:43:18.458938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.224 [2024-11-04 11:43:18.459022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.224 [2024-11-04 11:43:18.462169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.224 [2024-11-04 11:43:18.462260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.225 [2024-11-04 11:43:18.462440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.225 [2024-11-04 11:43:18.462501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:53.225 { 00:10:53.225 "results": [ 00:10:53.225 { 00:10:53.225 "job": "raid_bdev1", 
00:10:53.225 "core_mask": "0x1", 00:10:53.225 "workload": "randrw", 00:10:53.225 "percentage": 50, 00:10:53.225 "status": "finished", 00:10:53.225 "queue_depth": 1, 00:10:53.225 "io_size": 131072, 00:10:53.225 "runtime": 1.349516, 00:10:53.225 "iops": 12786.06552275038, 00:10:53.225 "mibps": 1598.2581903437974, 00:10:53.225 "io_failed": 0, 00:10:53.225 "io_timeout": 0, 00:10:53.225 "avg_latency_us": 75.46651003000207, 00:10:53.225 "min_latency_us": 23.923144104803495, 00:10:53.225 "max_latency_us": 1466.6899563318777 00:10:53.225 } 00:10:53.225 ], 00:10:53.225 "core_count": 1 00:10:53.225 } 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69319 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69319 ']' 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69319 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69319 00:10:53.225 killing process with pid 69319 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69319' 00:10:53.225 11:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69319 00:10:53.225 [2024-11-04 11:43:18.497294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.225 11:43:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69319 00:10:53.225 [2024-11-04 11:43:18.733062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hpRwnjjYDt 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:54.602 00:10:54.602 real 0m4.558s 00:10:54.602 user 0m5.421s 00:10:54.602 sys 0m0.540s 00:10:54.602 ************************************ 00:10:54.602 END TEST raid_read_error_test 00:10:54.602 ************************************ 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:54.602 11:43:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.602 11:43:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:54.602 11:43:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:54.602 11:43:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:54.602 11:43:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.602 ************************************ 00:10:54.602 START TEST raid_write_error_test 00:10:54.602 ************************************ 00:10:54.602 11:43:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bDMFZwNKZQ 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69459 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69459 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69459 ']' 00:10:54.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:54.602 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.861 [2024-11-04 11:43:20.127447] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:10:54.861 [2024-11-04 11:43:20.127574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69459 ] 00:10:54.861 [2024-11-04 11:43:20.308559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.120 [2024-11-04 11:43:20.425816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.120 [2024-11-04 11:43:20.631029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.120 [2024-11-04 11:43:20.631074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.693 BaseBdev1_malloc 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.693 11:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 true 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 [2024-11-04 11:43:21.009253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.694 [2024-11-04 11:43:21.009324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.694 [2024-11-04 11:43:21.009347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:55.694 [2024-11-04 11:43:21.009359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.694 [2024-11-04 11:43:21.011694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.694 [2024-11-04 11:43:21.011735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.694 BaseBdev1 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.694 BaseBdev2_malloc 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 true 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 [2024-11-04 11:43:21.075609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.694 [2024-11-04 11:43:21.075722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.694 [2024-11-04 11:43:21.075745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.694 [2024-11-04 11:43:21.075756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.694 [2024-11-04 11:43:21.078243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.694 [2024-11-04 11:43:21.078288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.694 BaseBdev2 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.694 11:43:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 BaseBdev3_malloc 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 true 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 [2024-11-04 11:43:21.155553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.694 [2024-11-04 11:43:21.155616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.694 [2024-11-04 11:43:21.155635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.694 [2024-11-04 11:43:21.155646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.694 [2024-11-04 11:43:21.157982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.694 [2024-11-04 11:43:21.158022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:55.694 BaseBdev3 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 [2024-11-04 11:43:21.167627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.694 [2024-11-04 11:43:21.169801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.694 [2024-11-04 11:43:21.169914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.694 [2024-11-04 11:43:21.170168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.694 [2024-11-04 11:43:21.170190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.694 [2024-11-04 11:43:21.170614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:55.694 [2024-11-04 11:43:21.170856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.694 [2024-11-04 11:43:21.170879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:55.694 [2024-11-04 11:43:21.171085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.694 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.959 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.959 "name": "raid_bdev1", 00:10:55.959 "uuid": "e4a34973-7e7b-44e0-9b24-2752996829b5", 00:10:55.959 "strip_size_kb": 0, 00:10:55.959 "state": "online", 00:10:55.959 "raid_level": "raid1", 00:10:55.959 "superblock": true, 00:10:55.959 "num_base_bdevs": 3, 00:10:55.959 "num_base_bdevs_discovered": 3, 00:10:55.959 "num_base_bdevs_operational": 3, 00:10:55.959 "base_bdevs_list": [ 00:10:55.959 { 00:10:55.959 "name": "BaseBdev1", 00:10:55.959 
"uuid": "ef14df4c-df9f-5bb8-889a-5451c67db03e", 00:10:55.959 "is_configured": true, 00:10:55.959 "data_offset": 2048, 00:10:55.959 "data_size": 63488 00:10:55.959 }, 00:10:55.959 { 00:10:55.959 "name": "BaseBdev2", 00:10:55.959 "uuid": "50e158a6-dd70-5acf-8a59-ac3e07fad451", 00:10:55.959 "is_configured": true, 00:10:55.959 "data_offset": 2048, 00:10:55.959 "data_size": 63488 00:10:55.959 }, 00:10:55.959 { 00:10:55.959 "name": "BaseBdev3", 00:10:55.959 "uuid": "fe87d99a-b464-50b6-bad8-b03952cf208a", 00:10:55.959 "is_configured": true, 00:10:55.959 "data_offset": 2048, 00:10:55.960 "data_size": 63488 00:10:55.960 } 00:10:55.960 ] 00:10:55.960 }' 00:10:55.960 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.960 11:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.219 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.219 11:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:56.219 [2024-11-04 11:43:21.687874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.158 [2024-11-04 11:43:22.620582] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:57.158 [2024-11-04 11:43:22.620656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.158 [2024-11-04 11:43:22.620880] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.158 11:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.417 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.417 "name": "raid_bdev1", 00:10:57.417 "uuid": "e4a34973-7e7b-44e0-9b24-2752996829b5", 00:10:57.417 "strip_size_kb": 0, 00:10:57.417 "state": "online", 00:10:57.417 "raid_level": "raid1", 00:10:57.417 "superblock": true, 00:10:57.417 "num_base_bdevs": 3, 00:10:57.417 "num_base_bdevs_discovered": 2, 00:10:57.417 "num_base_bdevs_operational": 2, 00:10:57.417 "base_bdevs_list": [ 00:10:57.417 { 00:10:57.417 "name": null, 00:10:57.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.417 "is_configured": false, 00:10:57.417 "data_offset": 0, 00:10:57.417 "data_size": 63488 00:10:57.417 }, 00:10:57.417 { 00:10:57.417 "name": "BaseBdev2", 00:10:57.417 "uuid": "50e158a6-dd70-5acf-8a59-ac3e07fad451", 00:10:57.417 "is_configured": true, 00:10:57.417 "data_offset": 2048, 00:10:57.417 "data_size": 63488 00:10:57.417 }, 00:10:57.417 { 00:10:57.417 "name": "BaseBdev3", 00:10:57.417 "uuid": "fe87d99a-b464-50b6-bad8-b03952cf208a", 00:10:57.417 "is_configured": true, 00:10:57.417 "data_offset": 2048, 00:10:57.417 "data_size": 63488 00:10:57.417 } 00:10:57.417 ] 00:10:57.417 }' 00:10:57.417 11:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.417 11:43:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.676 [2024-11-04 11:43:23.112436] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.676 [2024-11-04 11:43:23.112478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.676 [2024-11-04 11:43:23.115202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.676 [2024-11-04 11:43:23.115281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.676 [2024-11-04 11:43:23.115360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.676 [2024-11-04 11:43:23.115376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:57.676 { 00:10:57.676 "results": [ 00:10:57.676 { 00:10:57.676 "job": "raid_bdev1", 00:10:57.676 "core_mask": "0x1", 00:10:57.676 "workload": "randrw", 00:10:57.676 "percentage": 50, 00:10:57.676 "status": "finished", 00:10:57.676 "queue_depth": 1, 00:10:57.676 "io_size": 131072, 00:10:57.676 "runtime": 1.425397, 00:10:57.676 "iops": 14109.051723835535, 00:10:57.676 "mibps": 1763.6314654794419, 00:10:57.676 "io_failed": 0, 00:10:57.676 "io_timeout": 0, 00:10:57.676 "avg_latency_us": 68.10509376019859, 00:10:57.676 "min_latency_us": 23.475982532751093, 00:10:57.676 "max_latency_us": 1502.46288209607 00:10:57.676 } 00:10:57.676 ], 00:10:57.676 "core_count": 1 00:10:57.676 } 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69459 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69459 ']' 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69459 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:57.676 11:43:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69459 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:57.676 killing process with pid 69459 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69459' 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69459 00:10:57.676 [2024-11-04 11:43:23.160795] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.676 11:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69459 00:10:57.935 [2024-11-04 11:43:23.399816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bDMFZwNKZQ 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:59.314 00:10:59.314 real 0m4.580s 00:10:59.314 user 0m5.467s 00:10:59.314 sys 0m0.541s 00:10:59.314 11:43:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.314 11:43:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.314 ************************************ 00:10:59.314 END TEST raid_write_error_test 00:10:59.314 ************************************ 00:10:59.314 11:43:24 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:59.314 11:43:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:59.314 11:43:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:59.314 11:43:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:59.314 11:43:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.314 11:43:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.314 ************************************ 00:10:59.314 START TEST raid_state_function_test 00:10:59.314 ************************************ 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:59.314 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:59.315 
11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69608 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69608' 00:10:59.315 Process raid pid: 69608 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69608 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69608 ']' 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.315 11:43:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.315 [2024-11-04 11:43:24.760164] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:10:59.315 [2024-11-04 11:43:24.760298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.574 [2024-11-04 11:43:24.936190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.574 [2024-11-04 11:43:25.051275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.833 [2024-11-04 11:43:25.262242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.833 [2024-11-04 11:43:25.262289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.405 [2024-11-04 11:43:25.639338] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.405 [2024-11-04 11:43:25.639427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.405 [2024-11-04 11:43:25.639441] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.405 [2024-11-04 11:43:25.639452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.405 [2024-11-04 11:43:25.639460] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:00.405 [2024-11-04 11:43:25.639471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.405 [2024-11-04 11:43:25.639478] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.405 [2024-11-04 11:43:25.639488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.405 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.406 "name": "Existed_Raid", 00:11:00.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.406 "strip_size_kb": 64, 00:11:00.406 "state": "configuring", 00:11:00.406 "raid_level": "raid0", 00:11:00.406 "superblock": false, 00:11:00.406 "num_base_bdevs": 4, 00:11:00.406 "num_base_bdevs_discovered": 0, 00:11:00.406 "num_base_bdevs_operational": 4, 00:11:00.406 "base_bdevs_list": [ 00:11:00.406 { 00:11:00.406 "name": "BaseBdev1", 00:11:00.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.406 "is_configured": false, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 0 00:11:00.406 }, 00:11:00.406 { 00:11:00.406 "name": "BaseBdev2", 00:11:00.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.406 "is_configured": false, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 0 00:11:00.406 }, 00:11:00.406 { 00:11:00.406 "name": "BaseBdev3", 00:11:00.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.406 "is_configured": false, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 0 00:11:00.406 }, 00:11:00.406 { 00:11:00.406 "name": "BaseBdev4", 00:11:00.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.406 "is_configured": false, 00:11:00.406 "data_offset": 0, 00:11:00.406 "data_size": 0 00:11:00.406 } 00:11:00.406 ] 00:11:00.406 }' 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.406 11:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.676 [2024-11-04 11:43:26.102524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.676 [2024-11-04 11:43:26.102573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.676 [2024-11-04 11:43:26.114475] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.676 [2024-11-04 11:43:26.114525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.676 [2024-11-04 11:43:26.114535] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.676 [2024-11-04 11:43:26.114545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.676 [2024-11-04 11:43:26.114553] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.676 [2024-11-04 11:43:26.114563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.676 [2024-11-04 11:43:26.114570] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.676 [2024-11-04 11:43:26.114579] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.676 [2024-11-04 11:43:26.167414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.676 BaseBdev1 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.676 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.676 [ 00:11:00.676 { 00:11:00.676 "name": "BaseBdev1", 00:11:00.676 "aliases": [ 00:11:00.676 "ee9c0039-e7c6-4295-961f-c91c13a22fa3" 00:11:00.676 ], 00:11:00.676 "product_name": "Malloc disk", 00:11:00.676 "block_size": 512, 00:11:00.676 "num_blocks": 65536, 00:11:00.676 "uuid": "ee9c0039-e7c6-4295-961f-c91c13a22fa3", 00:11:00.676 "assigned_rate_limits": { 00:11:00.676 "rw_ios_per_sec": 0, 00:11:00.676 "rw_mbytes_per_sec": 0, 00:11:00.676 "r_mbytes_per_sec": 0, 00:11:00.676 "w_mbytes_per_sec": 0 00:11:00.676 }, 00:11:00.676 "claimed": true, 00:11:00.676 "claim_type": "exclusive_write", 00:11:00.676 "zoned": false, 00:11:00.676 "supported_io_types": { 00:11:00.676 "read": true, 00:11:00.676 "write": true, 00:11:00.676 "unmap": true, 00:11:00.676 "flush": true, 00:11:00.676 "reset": true, 00:11:00.677 "nvme_admin": false, 00:11:00.936 "nvme_io": false, 00:11:00.936 "nvme_io_md": false, 00:11:00.936 "write_zeroes": true, 00:11:00.936 "zcopy": true, 00:11:00.936 "get_zone_info": false, 00:11:00.936 "zone_management": false, 00:11:00.936 "zone_append": false, 00:11:00.936 "compare": false, 00:11:00.936 "compare_and_write": false, 00:11:00.936 "abort": true, 00:11:00.936 "seek_hole": false, 00:11:00.936 "seek_data": false, 00:11:00.936 "copy": true, 00:11:00.936 "nvme_iov_md": false 00:11:00.936 }, 00:11:00.936 "memory_domains": [ 00:11:00.936 { 00:11:00.936 "dma_device_id": "system", 00:11:00.936 "dma_device_type": 1 00:11:00.936 }, 00:11:00.936 { 00:11:00.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.936 "dma_device_type": 2 00:11:00.936 } 00:11:00.936 ], 00:11:00.936 "driver_specific": {} 00:11:00.936 } 00:11:00.936 ] 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.936 "name": "Existed_Raid", 
00:11:00.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.936 "strip_size_kb": 64, 00:11:00.936 "state": "configuring", 00:11:00.936 "raid_level": "raid0", 00:11:00.936 "superblock": false, 00:11:00.936 "num_base_bdevs": 4, 00:11:00.936 "num_base_bdevs_discovered": 1, 00:11:00.936 "num_base_bdevs_operational": 4, 00:11:00.936 "base_bdevs_list": [ 00:11:00.936 { 00:11:00.936 "name": "BaseBdev1", 00:11:00.936 "uuid": "ee9c0039-e7c6-4295-961f-c91c13a22fa3", 00:11:00.936 "is_configured": true, 00:11:00.936 "data_offset": 0, 00:11:00.936 "data_size": 65536 00:11:00.936 }, 00:11:00.936 { 00:11:00.936 "name": "BaseBdev2", 00:11:00.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.936 "is_configured": false, 00:11:00.936 "data_offset": 0, 00:11:00.936 "data_size": 0 00:11:00.936 }, 00:11:00.936 { 00:11:00.936 "name": "BaseBdev3", 00:11:00.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.936 "is_configured": false, 00:11:00.936 "data_offset": 0, 00:11:00.936 "data_size": 0 00:11:00.936 }, 00:11:00.936 { 00:11:00.936 "name": "BaseBdev4", 00:11:00.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.936 "is_configured": false, 00:11:00.936 "data_offset": 0, 00:11:00.936 "data_size": 0 00:11:00.936 } 00:11:00.936 ] 00:11:00.936 }' 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.936 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.195 [2024-11-04 11:43:26.590739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.195 [2024-11-04 11:43:26.590805] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.195 [2024-11-04 11:43:26.602783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.195 [2024-11-04 11:43:26.604835] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.195 [2024-11-04 11:43:26.604882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.195 [2024-11-04 11:43:26.604894] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.195 [2024-11-04 11:43:26.604906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.195 [2024-11-04 11:43:26.604913] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.195 [2024-11-04 11:43:26.604922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:01.195 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.196 "name": "Existed_Raid", 00:11:01.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.196 "strip_size_kb": 64, 00:11:01.196 "state": "configuring", 00:11:01.196 "raid_level": "raid0", 00:11:01.196 "superblock": false, 00:11:01.196 "num_base_bdevs": 4, 00:11:01.196 
"num_base_bdevs_discovered": 1, 00:11:01.196 "num_base_bdevs_operational": 4, 00:11:01.196 "base_bdevs_list": [ 00:11:01.196 { 00:11:01.196 "name": "BaseBdev1", 00:11:01.196 "uuid": "ee9c0039-e7c6-4295-961f-c91c13a22fa3", 00:11:01.196 "is_configured": true, 00:11:01.196 "data_offset": 0, 00:11:01.196 "data_size": 65536 00:11:01.196 }, 00:11:01.196 { 00:11:01.196 "name": "BaseBdev2", 00:11:01.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.196 "is_configured": false, 00:11:01.196 "data_offset": 0, 00:11:01.196 "data_size": 0 00:11:01.196 }, 00:11:01.196 { 00:11:01.196 "name": "BaseBdev3", 00:11:01.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.196 "is_configured": false, 00:11:01.196 "data_offset": 0, 00:11:01.196 "data_size": 0 00:11:01.196 }, 00:11:01.196 { 00:11:01.196 "name": "BaseBdev4", 00:11:01.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.196 "is_configured": false, 00:11:01.196 "data_offset": 0, 00:11:01.196 "data_size": 0 00:11:01.196 } 00:11:01.196 ] 00:11:01.196 }' 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.196 11:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.763 [2024-11-04 11:43:27.102357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.763 BaseBdev2 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:01.763 11:43:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.763 [ 00:11:01.763 { 00:11:01.763 "name": "BaseBdev2", 00:11:01.763 "aliases": [ 00:11:01.763 "a3982b3e-8d25-415b-96bd-561c79e45706" 00:11:01.763 ], 00:11:01.763 "product_name": "Malloc disk", 00:11:01.763 "block_size": 512, 00:11:01.763 "num_blocks": 65536, 00:11:01.763 "uuid": "a3982b3e-8d25-415b-96bd-561c79e45706", 00:11:01.763 "assigned_rate_limits": { 00:11:01.763 "rw_ios_per_sec": 0, 00:11:01.763 "rw_mbytes_per_sec": 0, 00:11:01.763 "r_mbytes_per_sec": 0, 00:11:01.763 "w_mbytes_per_sec": 0 00:11:01.763 }, 00:11:01.763 "claimed": true, 00:11:01.763 "claim_type": "exclusive_write", 00:11:01.763 "zoned": false, 00:11:01.763 "supported_io_types": { 
00:11:01.763 "read": true, 00:11:01.763 "write": true, 00:11:01.763 "unmap": true, 00:11:01.763 "flush": true, 00:11:01.763 "reset": true, 00:11:01.763 "nvme_admin": false, 00:11:01.763 "nvme_io": false, 00:11:01.763 "nvme_io_md": false, 00:11:01.763 "write_zeroes": true, 00:11:01.763 "zcopy": true, 00:11:01.763 "get_zone_info": false, 00:11:01.763 "zone_management": false, 00:11:01.763 "zone_append": false, 00:11:01.763 "compare": false, 00:11:01.763 "compare_and_write": false, 00:11:01.763 "abort": true, 00:11:01.763 "seek_hole": false, 00:11:01.763 "seek_data": false, 00:11:01.763 "copy": true, 00:11:01.763 "nvme_iov_md": false 00:11:01.763 }, 00:11:01.763 "memory_domains": [ 00:11:01.763 { 00:11:01.763 "dma_device_id": "system", 00:11:01.763 "dma_device_type": 1 00:11:01.763 }, 00:11:01.763 { 00:11:01.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.763 "dma_device_type": 2 00:11:01.763 } 00:11:01.763 ], 00:11:01.763 "driver_specific": {} 00:11:01.763 } 00:11:01.763 ] 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.763 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.763 "name": "Existed_Raid", 00:11:01.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.763 "strip_size_kb": 64, 00:11:01.763 "state": "configuring", 00:11:01.763 "raid_level": "raid0", 00:11:01.763 "superblock": false, 00:11:01.763 "num_base_bdevs": 4, 00:11:01.763 "num_base_bdevs_discovered": 2, 00:11:01.763 "num_base_bdevs_operational": 4, 00:11:01.763 "base_bdevs_list": [ 00:11:01.763 { 00:11:01.763 "name": "BaseBdev1", 00:11:01.763 "uuid": "ee9c0039-e7c6-4295-961f-c91c13a22fa3", 00:11:01.763 "is_configured": true, 00:11:01.764 "data_offset": 0, 00:11:01.764 "data_size": 65536 00:11:01.764 }, 00:11:01.764 { 00:11:01.764 "name": "BaseBdev2", 00:11:01.764 "uuid": "a3982b3e-8d25-415b-96bd-561c79e45706", 00:11:01.764 
"is_configured": true, 00:11:01.764 "data_offset": 0, 00:11:01.764 "data_size": 65536 00:11:01.764 }, 00:11:01.764 { 00:11:01.764 "name": "BaseBdev3", 00:11:01.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.764 "is_configured": false, 00:11:01.764 "data_offset": 0, 00:11:01.764 "data_size": 0 00:11:01.764 }, 00:11:01.764 { 00:11:01.764 "name": "BaseBdev4", 00:11:01.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.764 "is_configured": false, 00:11:01.764 "data_offset": 0, 00:11:01.764 "data_size": 0 00:11:01.764 } 00:11:01.764 ] 00:11:01.764 }' 00:11:01.764 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.764 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.332 [2024-11-04 11:43:27.644126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.332 BaseBdev3 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.332 [ 00:11:02.332 { 00:11:02.332 "name": "BaseBdev3", 00:11:02.332 "aliases": [ 00:11:02.332 "719b84aa-1534-4036-a168-3addcf1dd49a" 00:11:02.332 ], 00:11:02.332 "product_name": "Malloc disk", 00:11:02.332 "block_size": 512, 00:11:02.332 "num_blocks": 65536, 00:11:02.332 "uuid": "719b84aa-1534-4036-a168-3addcf1dd49a", 00:11:02.332 "assigned_rate_limits": { 00:11:02.332 "rw_ios_per_sec": 0, 00:11:02.332 "rw_mbytes_per_sec": 0, 00:11:02.332 "r_mbytes_per_sec": 0, 00:11:02.332 "w_mbytes_per_sec": 0 00:11:02.332 }, 00:11:02.332 "claimed": true, 00:11:02.332 "claim_type": "exclusive_write", 00:11:02.332 "zoned": false, 00:11:02.332 "supported_io_types": { 00:11:02.332 "read": true, 00:11:02.332 "write": true, 00:11:02.332 "unmap": true, 00:11:02.332 "flush": true, 00:11:02.332 "reset": true, 00:11:02.332 "nvme_admin": false, 00:11:02.332 "nvme_io": false, 00:11:02.332 "nvme_io_md": false, 00:11:02.332 "write_zeroes": true, 00:11:02.332 "zcopy": true, 00:11:02.332 "get_zone_info": false, 00:11:02.332 "zone_management": false, 00:11:02.332 "zone_append": false, 00:11:02.332 "compare": false, 00:11:02.332 "compare_and_write": false, 
00:11:02.332 "abort": true, 00:11:02.332 "seek_hole": false, 00:11:02.332 "seek_data": false, 00:11:02.332 "copy": true, 00:11:02.332 "nvme_iov_md": false 00:11:02.332 }, 00:11:02.332 "memory_domains": [ 00:11:02.332 { 00:11:02.332 "dma_device_id": "system", 00:11:02.332 "dma_device_type": 1 00:11:02.332 }, 00:11:02.332 { 00:11:02.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.332 "dma_device_type": 2 00:11:02.332 } 00:11:02.332 ], 00:11:02.332 "driver_specific": {} 00:11:02.332 } 00:11:02.332 ] 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.332 "name": "Existed_Raid", 00:11:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.332 "strip_size_kb": 64, 00:11:02.332 "state": "configuring", 00:11:02.332 "raid_level": "raid0", 00:11:02.332 "superblock": false, 00:11:02.332 "num_base_bdevs": 4, 00:11:02.332 "num_base_bdevs_discovered": 3, 00:11:02.332 "num_base_bdevs_operational": 4, 00:11:02.332 "base_bdevs_list": [ 00:11:02.332 { 00:11:02.332 "name": "BaseBdev1", 00:11:02.332 "uuid": "ee9c0039-e7c6-4295-961f-c91c13a22fa3", 00:11:02.332 "is_configured": true, 00:11:02.332 "data_offset": 0, 00:11:02.332 "data_size": 65536 00:11:02.332 }, 00:11:02.332 { 00:11:02.332 "name": "BaseBdev2", 00:11:02.332 "uuid": "a3982b3e-8d25-415b-96bd-561c79e45706", 00:11:02.332 "is_configured": true, 00:11:02.332 "data_offset": 0, 00:11:02.332 "data_size": 65536 00:11:02.332 }, 00:11:02.332 { 00:11:02.332 "name": "BaseBdev3", 00:11:02.332 "uuid": "719b84aa-1534-4036-a168-3addcf1dd49a", 00:11:02.332 "is_configured": true, 00:11:02.332 "data_offset": 0, 00:11:02.332 "data_size": 65536 00:11:02.332 }, 00:11:02.332 { 00:11:02.332 "name": "BaseBdev4", 00:11:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.332 "is_configured": false, 
00:11:02.332 "data_offset": 0, 00:11:02.332 "data_size": 0 00:11:02.332 } 00:11:02.332 ] 00:11:02.332 }' 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.332 11:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.902 [2024-11-04 11:43:28.179523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:02.902 [2024-11-04 11:43:28.179581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:02.902 [2024-11-04 11:43:28.179592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:02.902 [2024-11-04 11:43:28.179887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:02.902 [2024-11-04 11:43:28.180138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:02.902 [2024-11-04 11:43:28.180164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:02.902 [2024-11-04 11:43:28.180506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.902 BaseBdev4 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.902 [ 00:11:02.902 { 00:11:02.902 "name": "BaseBdev4", 00:11:02.902 "aliases": [ 00:11:02.902 "ff09d06a-2d87-4534-9b34-cfcbb702c676" 00:11:02.902 ], 00:11:02.902 "product_name": "Malloc disk", 00:11:02.902 "block_size": 512, 00:11:02.902 "num_blocks": 65536, 00:11:02.902 "uuid": "ff09d06a-2d87-4534-9b34-cfcbb702c676", 00:11:02.902 "assigned_rate_limits": { 00:11:02.902 "rw_ios_per_sec": 0, 00:11:02.902 "rw_mbytes_per_sec": 0, 00:11:02.902 "r_mbytes_per_sec": 0, 00:11:02.902 "w_mbytes_per_sec": 0 00:11:02.902 }, 00:11:02.902 "claimed": true, 00:11:02.902 "claim_type": "exclusive_write", 00:11:02.902 "zoned": false, 00:11:02.902 "supported_io_types": { 00:11:02.902 "read": true, 00:11:02.902 "write": true, 00:11:02.902 "unmap": true, 00:11:02.902 "flush": true, 00:11:02.902 "reset": true, 00:11:02.902 
"nvme_admin": false, 00:11:02.902 "nvme_io": false, 00:11:02.902 "nvme_io_md": false, 00:11:02.902 "write_zeroes": true, 00:11:02.902 "zcopy": true, 00:11:02.902 "get_zone_info": false, 00:11:02.902 "zone_management": false, 00:11:02.902 "zone_append": false, 00:11:02.902 "compare": false, 00:11:02.902 "compare_and_write": false, 00:11:02.902 "abort": true, 00:11:02.902 "seek_hole": false, 00:11:02.902 "seek_data": false, 00:11:02.902 "copy": true, 00:11:02.902 "nvme_iov_md": false 00:11:02.902 }, 00:11:02.902 "memory_domains": [ 00:11:02.902 { 00:11:02.902 "dma_device_id": "system", 00:11:02.902 "dma_device_type": 1 00:11:02.902 }, 00:11:02.902 { 00:11:02.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.902 "dma_device_type": 2 00:11:02.902 } 00:11:02.902 ], 00:11:02.902 "driver_specific": {} 00:11:02.902 } 00:11:02.902 ] 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.902 11:43:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.902 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.903 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.903 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.903 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.903 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.903 "name": "Existed_Raid", 00:11:02.903 "uuid": "2b5a058c-b99a-45bd-ba9d-96cf0dfe7732", 00:11:02.903 "strip_size_kb": 64, 00:11:02.903 "state": "online", 00:11:02.903 "raid_level": "raid0", 00:11:02.903 "superblock": false, 00:11:02.903 "num_base_bdevs": 4, 00:11:02.903 "num_base_bdevs_discovered": 4, 00:11:02.903 "num_base_bdevs_operational": 4, 00:11:02.903 "base_bdevs_list": [ 00:11:02.903 { 00:11:02.903 "name": "BaseBdev1", 00:11:02.903 "uuid": "ee9c0039-e7c6-4295-961f-c91c13a22fa3", 00:11:02.903 "is_configured": true, 00:11:02.903 "data_offset": 0, 00:11:02.903 "data_size": 65536 00:11:02.903 }, 00:11:02.903 { 00:11:02.903 "name": "BaseBdev2", 00:11:02.903 "uuid": "a3982b3e-8d25-415b-96bd-561c79e45706", 00:11:02.903 "is_configured": true, 00:11:02.903 "data_offset": 0, 00:11:02.903 "data_size": 65536 00:11:02.903 }, 00:11:02.903 { 00:11:02.903 "name": "BaseBdev3", 00:11:02.903 "uuid": 
"719b84aa-1534-4036-a168-3addcf1dd49a", 00:11:02.903 "is_configured": true, 00:11:02.903 "data_offset": 0, 00:11:02.903 "data_size": 65536 00:11:02.903 }, 00:11:02.903 { 00:11:02.903 "name": "BaseBdev4", 00:11:02.903 "uuid": "ff09d06a-2d87-4534-9b34-cfcbb702c676", 00:11:02.903 "is_configured": true, 00:11:02.903 "data_offset": 0, 00:11:02.903 "data_size": 65536 00:11:02.903 } 00:11:02.903 ] 00:11:02.903 }' 00:11:02.903 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.903 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.172 [2024-11-04 11:43:28.659126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.172 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.432 11:43:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.432 "name": "Existed_Raid", 00:11:03.432 "aliases": [ 00:11:03.432 "2b5a058c-b99a-45bd-ba9d-96cf0dfe7732" 00:11:03.432 ], 00:11:03.432 "product_name": "Raid Volume", 00:11:03.432 "block_size": 512, 00:11:03.432 "num_blocks": 262144, 00:11:03.432 "uuid": "2b5a058c-b99a-45bd-ba9d-96cf0dfe7732", 00:11:03.432 "assigned_rate_limits": { 00:11:03.432 "rw_ios_per_sec": 0, 00:11:03.432 "rw_mbytes_per_sec": 0, 00:11:03.432 "r_mbytes_per_sec": 0, 00:11:03.432 "w_mbytes_per_sec": 0 00:11:03.432 }, 00:11:03.432 "claimed": false, 00:11:03.432 "zoned": false, 00:11:03.432 "supported_io_types": { 00:11:03.432 "read": true, 00:11:03.432 "write": true, 00:11:03.432 "unmap": true, 00:11:03.432 "flush": true, 00:11:03.432 "reset": true, 00:11:03.432 "nvme_admin": false, 00:11:03.432 "nvme_io": false, 00:11:03.432 "nvme_io_md": false, 00:11:03.432 "write_zeroes": true, 00:11:03.432 "zcopy": false, 00:11:03.432 "get_zone_info": false, 00:11:03.432 "zone_management": false, 00:11:03.432 "zone_append": false, 00:11:03.432 "compare": false, 00:11:03.432 "compare_and_write": false, 00:11:03.432 "abort": false, 00:11:03.432 "seek_hole": false, 00:11:03.432 "seek_data": false, 00:11:03.432 "copy": false, 00:11:03.432 "nvme_iov_md": false 00:11:03.432 }, 00:11:03.432 "memory_domains": [ 00:11:03.432 { 00:11:03.432 "dma_device_id": "system", 00:11:03.432 "dma_device_type": 1 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.432 "dma_device_type": 2 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "dma_device_id": "system", 00:11:03.432 "dma_device_type": 1 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.432 "dma_device_type": 2 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "dma_device_id": "system", 00:11:03.432 "dma_device_type": 1 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:03.432 "dma_device_type": 2 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "dma_device_id": "system", 00:11:03.432 "dma_device_type": 1 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.432 "dma_device_type": 2 00:11:03.432 } 00:11:03.432 ], 00:11:03.432 "driver_specific": { 00:11:03.432 "raid": { 00:11:03.432 "uuid": "2b5a058c-b99a-45bd-ba9d-96cf0dfe7732", 00:11:03.432 "strip_size_kb": 64, 00:11:03.432 "state": "online", 00:11:03.432 "raid_level": "raid0", 00:11:03.432 "superblock": false, 00:11:03.432 "num_base_bdevs": 4, 00:11:03.432 "num_base_bdevs_discovered": 4, 00:11:03.432 "num_base_bdevs_operational": 4, 00:11:03.432 "base_bdevs_list": [ 00:11:03.432 { 00:11:03.432 "name": "BaseBdev1", 00:11:03.432 "uuid": "ee9c0039-e7c6-4295-961f-c91c13a22fa3", 00:11:03.432 "is_configured": true, 00:11:03.432 "data_offset": 0, 00:11:03.432 "data_size": 65536 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "name": "BaseBdev2", 00:11:03.432 "uuid": "a3982b3e-8d25-415b-96bd-561c79e45706", 00:11:03.432 "is_configured": true, 00:11:03.432 "data_offset": 0, 00:11:03.432 "data_size": 65536 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "name": "BaseBdev3", 00:11:03.432 "uuid": "719b84aa-1534-4036-a168-3addcf1dd49a", 00:11:03.432 "is_configured": true, 00:11:03.432 "data_offset": 0, 00:11:03.432 "data_size": 65536 00:11:03.432 }, 00:11:03.432 { 00:11:03.432 "name": "BaseBdev4", 00:11:03.432 "uuid": "ff09d06a-2d87-4534-9b34-cfcbb702c676", 00:11:03.432 "is_configured": true, 00:11:03.432 "data_offset": 0, 00:11:03.432 "data_size": 65536 00:11:03.432 } 00:11:03.432 ] 00:11:03.432 } 00:11:03.432 } 00:11:03.432 }' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:03.432 BaseBdev2 00:11:03.432 BaseBdev3 
00:11:03.432 BaseBdev4' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.432 11:43:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.432 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.691 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.691 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.691 11:43:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.691 11:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:03.691 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.691 11:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.691 [2024-11-04 11:43:28.974315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.691 [2024-11-04 11:43:28.974353] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.691 [2024-11-04 11:43:28.974423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.691 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.691 "name": "Existed_Raid", 00:11:03.691 "uuid": "2b5a058c-b99a-45bd-ba9d-96cf0dfe7732", 00:11:03.691 "strip_size_kb": 64, 00:11:03.691 "state": "offline", 00:11:03.691 "raid_level": "raid0", 00:11:03.691 "superblock": false, 00:11:03.691 "num_base_bdevs": 4, 00:11:03.691 "num_base_bdevs_discovered": 3, 00:11:03.691 "num_base_bdevs_operational": 3, 00:11:03.691 "base_bdevs_list": [ 00:11:03.691 { 00:11:03.691 "name": null, 00:11:03.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.691 "is_configured": false, 00:11:03.691 "data_offset": 0, 00:11:03.691 "data_size": 65536 00:11:03.691 }, 00:11:03.691 { 00:11:03.691 "name": "BaseBdev2", 00:11:03.691 "uuid": "a3982b3e-8d25-415b-96bd-561c79e45706", 00:11:03.691 "is_configured": 
true, 00:11:03.691 "data_offset": 0, 00:11:03.691 "data_size": 65536 00:11:03.691 }, 00:11:03.691 { 00:11:03.691 "name": "BaseBdev3", 00:11:03.691 "uuid": "719b84aa-1534-4036-a168-3addcf1dd49a", 00:11:03.691 "is_configured": true, 00:11:03.691 "data_offset": 0, 00:11:03.691 "data_size": 65536 00:11:03.691 }, 00:11:03.691 { 00:11:03.691 "name": "BaseBdev4", 00:11:03.691 "uuid": "ff09d06a-2d87-4534-9b34-cfcbb702c676", 00:11:03.691 "is_configured": true, 00:11:03.691 "data_offset": 0, 00:11:03.692 "data_size": 65536 00:11:03.692 } 00:11:03.692 ] 00:11:03.692 }' 00:11:03.692 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.692 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:04.262 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.263 [2024-11-04 11:43:29.568320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.263 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.263 [2024-11-04 11:43:29.721021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.540 11:43:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.540 [2024-11-04 11:43:29.879944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:04.540 [2024-11-04 11:43:29.880004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.540 11:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.540 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.799 BaseBdev2 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.799 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.799 [ 00:11:04.799 { 00:11:04.799 "name": "BaseBdev2", 00:11:04.799 "aliases": [ 00:11:04.799 "81c97e40-18a3-4085-8245-ce46ba539823" 00:11:04.799 ], 00:11:04.799 "product_name": "Malloc disk", 00:11:04.799 "block_size": 512, 00:11:04.799 "num_blocks": 65536, 00:11:04.799 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:04.799 "assigned_rate_limits": { 00:11:04.799 "rw_ios_per_sec": 0, 00:11:04.799 "rw_mbytes_per_sec": 0, 00:11:04.799 "r_mbytes_per_sec": 0, 00:11:04.799 "w_mbytes_per_sec": 0 00:11:04.799 }, 00:11:04.799 "claimed": false, 00:11:04.799 "zoned": false, 00:11:04.799 "supported_io_types": { 00:11:04.799 "read": true, 00:11:04.799 "write": true, 00:11:04.799 "unmap": true, 00:11:04.799 "flush": true, 00:11:04.799 "reset": true, 00:11:04.800 "nvme_admin": false, 00:11:04.800 "nvme_io": false, 00:11:04.800 "nvme_io_md": false, 00:11:04.800 "write_zeroes": true, 00:11:04.800 "zcopy": true, 00:11:04.800 "get_zone_info": false, 00:11:04.800 "zone_management": false, 00:11:04.800 "zone_append": false, 00:11:04.800 "compare": false, 00:11:04.800 "compare_and_write": false, 00:11:04.800 "abort": true, 00:11:04.800 "seek_hole": false, 00:11:04.800 
"seek_data": false, 00:11:04.800 "copy": true, 00:11:04.800 "nvme_iov_md": false 00:11:04.800 }, 00:11:04.800 "memory_domains": [ 00:11:04.800 { 00:11:04.800 "dma_device_id": "system", 00:11:04.800 "dma_device_type": 1 00:11:04.800 }, 00:11:04.800 { 00:11:04.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.800 "dma_device_type": 2 00:11:04.800 } 00:11:04.800 ], 00:11:04.800 "driver_specific": {} 00:11:04.800 } 00:11:04.800 ] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.800 BaseBdev3 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.800 [ 00:11:04.800 { 00:11:04.800 "name": "BaseBdev3", 00:11:04.800 "aliases": [ 00:11:04.800 "d2bce5c2-d606-4ae3-a61c-ea3517cf2131" 00:11:04.800 ], 00:11:04.800 "product_name": "Malloc disk", 00:11:04.800 "block_size": 512, 00:11:04.800 "num_blocks": 65536, 00:11:04.800 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:04.800 "assigned_rate_limits": { 00:11:04.800 "rw_ios_per_sec": 0, 00:11:04.800 "rw_mbytes_per_sec": 0, 00:11:04.800 "r_mbytes_per_sec": 0, 00:11:04.800 "w_mbytes_per_sec": 0 00:11:04.800 }, 00:11:04.800 "claimed": false, 00:11:04.800 "zoned": false, 00:11:04.800 "supported_io_types": { 00:11:04.800 "read": true, 00:11:04.800 "write": true, 00:11:04.800 "unmap": true, 00:11:04.800 "flush": true, 00:11:04.800 "reset": true, 00:11:04.800 "nvme_admin": false, 00:11:04.800 "nvme_io": false, 00:11:04.800 "nvme_io_md": false, 00:11:04.800 "write_zeroes": true, 00:11:04.800 "zcopy": true, 00:11:04.800 "get_zone_info": false, 00:11:04.800 "zone_management": false, 00:11:04.800 "zone_append": false, 00:11:04.800 "compare": false, 00:11:04.800 "compare_and_write": false, 00:11:04.800 "abort": true, 00:11:04.800 "seek_hole": false, 00:11:04.800 "seek_data": false, 
00:11:04.800 "copy": true, 00:11:04.800 "nvme_iov_md": false 00:11:04.800 }, 00:11:04.800 "memory_domains": [ 00:11:04.800 { 00:11:04.800 "dma_device_id": "system", 00:11:04.800 "dma_device_type": 1 00:11:04.800 }, 00:11:04.800 { 00:11:04.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.800 "dma_device_type": 2 00:11:04.800 } 00:11:04.800 ], 00:11:04.800 "driver_specific": {} 00:11:04.800 } 00:11:04.800 ] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.800 BaseBdev4 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:04.800 
11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.800 [ 00:11:04.800 { 00:11:04.800 "name": "BaseBdev4", 00:11:04.800 "aliases": [ 00:11:04.800 "2e3311d9-4958-4946-8dbc-15d7883e7e1e" 00:11:04.800 ], 00:11:04.800 "product_name": "Malloc disk", 00:11:04.800 "block_size": 512, 00:11:04.800 "num_blocks": 65536, 00:11:04.800 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:04.800 "assigned_rate_limits": { 00:11:04.800 "rw_ios_per_sec": 0, 00:11:04.800 "rw_mbytes_per_sec": 0, 00:11:04.800 "r_mbytes_per_sec": 0, 00:11:04.800 "w_mbytes_per_sec": 0 00:11:04.800 }, 00:11:04.800 "claimed": false, 00:11:04.800 "zoned": false, 00:11:04.800 "supported_io_types": { 00:11:04.800 "read": true, 00:11:04.800 "write": true, 00:11:04.800 "unmap": true, 00:11:04.800 "flush": true, 00:11:04.800 "reset": true, 00:11:04.800 "nvme_admin": false, 00:11:04.800 "nvme_io": false, 00:11:04.800 "nvme_io_md": false, 00:11:04.800 "write_zeroes": true, 00:11:04.800 "zcopy": true, 00:11:04.800 "get_zone_info": false, 00:11:04.800 "zone_management": false, 00:11:04.800 "zone_append": false, 00:11:04.800 "compare": false, 00:11:04.800 "compare_and_write": false, 00:11:04.800 "abort": true, 00:11:04.800 "seek_hole": false, 00:11:04.800 "seek_data": false, 00:11:04.800 
"copy": true, 00:11:04.800 "nvme_iov_md": false 00:11:04.800 }, 00:11:04.800 "memory_domains": [ 00:11:04.800 { 00:11:04.800 "dma_device_id": "system", 00:11:04.800 "dma_device_type": 1 00:11:04.800 }, 00:11:04.800 { 00:11:04.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.800 "dma_device_type": 2 00:11:04.800 } 00:11:04.800 ], 00:11:04.800 "driver_specific": {} 00:11:04.800 } 00:11:04.800 ] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.800 [2024-11-04 11:43:30.270458] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.800 [2024-11-04 11:43:30.270504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.800 [2024-11-04 11:43:30.270527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.800 [2024-11-04 11:43:30.272687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.800 [2024-11-04 11:43:30.272750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.800 11:43:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.800 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.801 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.059 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.059 "name": "Existed_Raid", 00:11:05.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.059 "strip_size_kb": 64, 00:11:05.059 "state": "configuring", 00:11:05.059 
"raid_level": "raid0", 00:11:05.059 "superblock": false, 00:11:05.059 "num_base_bdevs": 4, 00:11:05.059 "num_base_bdevs_discovered": 3, 00:11:05.059 "num_base_bdevs_operational": 4, 00:11:05.059 "base_bdevs_list": [ 00:11:05.059 { 00:11:05.059 "name": "BaseBdev1", 00:11:05.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.059 "is_configured": false, 00:11:05.059 "data_offset": 0, 00:11:05.059 "data_size": 0 00:11:05.059 }, 00:11:05.059 { 00:11:05.059 "name": "BaseBdev2", 00:11:05.059 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:05.059 "is_configured": true, 00:11:05.059 "data_offset": 0, 00:11:05.059 "data_size": 65536 00:11:05.059 }, 00:11:05.059 { 00:11:05.059 "name": "BaseBdev3", 00:11:05.059 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:05.059 "is_configured": true, 00:11:05.059 "data_offset": 0, 00:11:05.059 "data_size": 65536 00:11:05.059 }, 00:11:05.059 { 00:11:05.059 "name": "BaseBdev4", 00:11:05.059 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:05.060 "is_configured": true, 00:11:05.060 "data_offset": 0, 00:11:05.060 "data_size": 65536 00:11:05.060 } 00:11:05.060 ] 00:11:05.060 }' 00:11:05.060 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.060 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.319 [2024-11-04 11:43:30.721708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.319 "name": "Existed_Raid", 00:11:05.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.319 "strip_size_kb": 64, 00:11:05.319 "state": "configuring", 00:11:05.319 "raid_level": "raid0", 00:11:05.319 "superblock": false, 00:11:05.319 
"num_base_bdevs": 4, 00:11:05.319 "num_base_bdevs_discovered": 2, 00:11:05.319 "num_base_bdevs_operational": 4, 00:11:05.319 "base_bdevs_list": [ 00:11:05.319 { 00:11:05.319 "name": "BaseBdev1", 00:11:05.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.319 "is_configured": false, 00:11:05.319 "data_offset": 0, 00:11:05.319 "data_size": 0 00:11:05.319 }, 00:11:05.319 { 00:11:05.319 "name": null, 00:11:05.319 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:05.319 "is_configured": false, 00:11:05.319 "data_offset": 0, 00:11:05.319 "data_size": 65536 00:11:05.319 }, 00:11:05.319 { 00:11:05.319 "name": "BaseBdev3", 00:11:05.319 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:05.319 "is_configured": true, 00:11:05.319 "data_offset": 0, 00:11:05.319 "data_size": 65536 00:11:05.319 }, 00:11:05.319 { 00:11:05.319 "name": "BaseBdev4", 00:11:05.319 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:05.319 "is_configured": true, 00:11:05.319 "data_offset": 0, 00:11:05.319 "data_size": 65536 00:11:05.319 } 00:11:05.319 ] 00:11:05.319 }' 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.319 11:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:05.888 11:43:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.888 [2024-11-04 11:43:31.258553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.888 BaseBdev1 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.888 [ 00:11:05.888 { 00:11:05.888 "name": "BaseBdev1", 00:11:05.888 "aliases": [ 00:11:05.888 "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172" 00:11:05.888 ], 00:11:05.888 "product_name": "Malloc disk", 00:11:05.888 "block_size": 512, 00:11:05.888 "num_blocks": 65536, 00:11:05.888 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:05.888 "assigned_rate_limits": { 00:11:05.888 "rw_ios_per_sec": 0, 00:11:05.888 "rw_mbytes_per_sec": 0, 00:11:05.888 "r_mbytes_per_sec": 0, 00:11:05.888 "w_mbytes_per_sec": 0 00:11:05.888 }, 00:11:05.888 "claimed": true, 00:11:05.888 "claim_type": "exclusive_write", 00:11:05.888 "zoned": false, 00:11:05.888 "supported_io_types": { 00:11:05.888 "read": true, 00:11:05.888 "write": true, 00:11:05.888 "unmap": true, 00:11:05.888 "flush": true, 00:11:05.888 "reset": true, 00:11:05.888 "nvme_admin": false, 00:11:05.888 "nvme_io": false, 00:11:05.888 "nvme_io_md": false, 00:11:05.888 "write_zeroes": true, 00:11:05.888 "zcopy": true, 00:11:05.888 "get_zone_info": false, 00:11:05.888 "zone_management": false, 00:11:05.888 "zone_append": false, 00:11:05.888 "compare": false, 00:11:05.888 "compare_and_write": false, 00:11:05.888 "abort": true, 00:11:05.888 "seek_hole": false, 00:11:05.888 "seek_data": false, 00:11:05.888 "copy": true, 00:11:05.888 "nvme_iov_md": false 00:11:05.888 }, 00:11:05.888 "memory_domains": [ 00:11:05.888 { 00:11:05.888 "dma_device_id": "system", 00:11:05.888 "dma_device_type": 1 00:11:05.888 }, 00:11:05.888 { 00:11:05.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.888 "dma_device_type": 2 00:11:05.888 } 00:11:05.888 ], 00:11:05.888 "driver_specific": {} 00:11:05.888 } 00:11:05.888 ] 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.888 "name": "Existed_Raid", 00:11:05.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.888 "strip_size_kb": 64, 00:11:05.888 "state": "configuring", 00:11:05.888 "raid_level": "raid0", 00:11:05.888 "superblock": false, 
00:11:05.888 "num_base_bdevs": 4, 00:11:05.888 "num_base_bdevs_discovered": 3, 00:11:05.888 "num_base_bdevs_operational": 4, 00:11:05.888 "base_bdevs_list": [ 00:11:05.888 { 00:11:05.888 "name": "BaseBdev1", 00:11:05.888 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:05.888 "is_configured": true, 00:11:05.888 "data_offset": 0, 00:11:05.888 "data_size": 65536 00:11:05.888 }, 00:11:05.888 { 00:11:05.888 "name": null, 00:11:05.888 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:05.888 "is_configured": false, 00:11:05.888 "data_offset": 0, 00:11:05.888 "data_size": 65536 00:11:05.888 }, 00:11:05.888 { 00:11:05.888 "name": "BaseBdev3", 00:11:05.888 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:05.888 "is_configured": true, 00:11:05.888 "data_offset": 0, 00:11:05.888 "data_size": 65536 00:11:05.888 }, 00:11:05.888 { 00:11:05.888 "name": "BaseBdev4", 00:11:05.888 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:05.888 "is_configured": true, 00:11:05.888 "data_offset": 0, 00:11:05.888 "data_size": 65536 00:11:05.888 } 00:11:05.888 ] 00:11:05.888 }' 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.888 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:06.456 11:43:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 [2024-11-04 11:43:31.805725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.456 11:43:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.456 "name": "Existed_Raid", 00:11:06.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.456 "strip_size_kb": 64, 00:11:06.456 "state": "configuring", 00:11:06.456 "raid_level": "raid0", 00:11:06.456 "superblock": false, 00:11:06.456 "num_base_bdevs": 4, 00:11:06.456 "num_base_bdevs_discovered": 2, 00:11:06.456 "num_base_bdevs_operational": 4, 00:11:06.456 "base_bdevs_list": [ 00:11:06.456 { 00:11:06.456 "name": "BaseBdev1", 00:11:06.456 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:06.456 "is_configured": true, 00:11:06.456 "data_offset": 0, 00:11:06.456 "data_size": 65536 00:11:06.456 }, 00:11:06.456 { 00:11:06.456 "name": null, 00:11:06.456 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:06.456 "is_configured": false, 00:11:06.456 "data_offset": 0, 00:11:06.456 "data_size": 65536 00:11:06.456 }, 00:11:06.456 { 00:11:06.456 "name": null, 00:11:06.456 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:06.456 "is_configured": false, 00:11:06.456 "data_offset": 0, 00:11:06.456 "data_size": 65536 00:11:06.456 }, 00:11:06.456 { 00:11:06.456 "name": "BaseBdev4", 00:11:06.456 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:06.456 "is_configured": true, 00:11:06.456 "data_offset": 0, 00:11:06.456 "data_size": 65536 00:11:06.456 } 00:11:06.456 ] 00:11:06.456 }' 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.456 11:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.024 [2024-11-04 11:43:32.300886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.024 "name": "Existed_Raid", 00:11:07.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.024 "strip_size_kb": 64, 00:11:07.024 "state": "configuring", 00:11:07.024 "raid_level": "raid0", 00:11:07.024 "superblock": false, 00:11:07.024 "num_base_bdevs": 4, 00:11:07.024 "num_base_bdevs_discovered": 3, 00:11:07.024 "num_base_bdevs_operational": 4, 00:11:07.024 "base_bdevs_list": [ 00:11:07.024 { 00:11:07.024 "name": "BaseBdev1", 00:11:07.024 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:07.024 "is_configured": true, 00:11:07.024 "data_offset": 0, 00:11:07.024 "data_size": 65536 00:11:07.024 }, 00:11:07.024 { 00:11:07.024 "name": null, 00:11:07.024 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:07.024 "is_configured": false, 00:11:07.024 "data_offset": 0, 00:11:07.024 "data_size": 65536 00:11:07.024 }, 00:11:07.024 { 00:11:07.024 "name": "BaseBdev3", 00:11:07.024 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 
00:11:07.024 "is_configured": true, 00:11:07.024 "data_offset": 0, 00:11:07.024 "data_size": 65536 00:11:07.024 }, 00:11:07.024 { 00:11:07.024 "name": "BaseBdev4", 00:11:07.024 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:07.024 "is_configured": true, 00:11:07.024 "data_offset": 0, 00:11:07.024 "data_size": 65536 00:11:07.024 } 00:11:07.024 ] 00:11:07.024 }' 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.024 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.283 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.283 [2024-11-04 11:43:32.784131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.542 11:43:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.542 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.542 "name": "Existed_Raid", 00:11:07.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.542 "strip_size_kb": 64, 00:11:07.542 "state": "configuring", 00:11:07.542 "raid_level": "raid0", 00:11:07.542 "superblock": false, 00:11:07.542 "num_base_bdevs": 4, 00:11:07.542 "num_base_bdevs_discovered": 2, 00:11:07.542 
"num_base_bdevs_operational": 4, 00:11:07.542 "base_bdevs_list": [ 00:11:07.542 { 00:11:07.542 "name": null, 00:11:07.542 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:07.542 "is_configured": false, 00:11:07.542 "data_offset": 0, 00:11:07.542 "data_size": 65536 00:11:07.542 }, 00:11:07.542 { 00:11:07.542 "name": null, 00:11:07.542 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:07.542 "is_configured": false, 00:11:07.543 "data_offset": 0, 00:11:07.543 "data_size": 65536 00:11:07.543 }, 00:11:07.543 { 00:11:07.543 "name": "BaseBdev3", 00:11:07.543 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:07.543 "is_configured": true, 00:11:07.543 "data_offset": 0, 00:11:07.543 "data_size": 65536 00:11:07.543 }, 00:11:07.543 { 00:11:07.543 "name": "BaseBdev4", 00:11:07.543 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:07.543 "is_configured": true, 00:11:07.543 "data_offset": 0, 00:11:07.543 "data_size": 65536 00:11:07.543 } 00:11:07.543 ] 00:11:07.543 }' 00:11:07.543 11:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.543 11:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.802 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.802 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.802 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.802 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.802 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.061 [2024-11-04 11:43:33.349606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.061 11:43:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.061 "name": "Existed_Raid", 00:11:08.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.061 "strip_size_kb": 64, 00:11:08.061 "state": "configuring", 00:11:08.061 "raid_level": "raid0", 00:11:08.061 "superblock": false, 00:11:08.061 "num_base_bdevs": 4, 00:11:08.061 "num_base_bdevs_discovered": 3, 00:11:08.061 "num_base_bdevs_operational": 4, 00:11:08.061 "base_bdevs_list": [ 00:11:08.061 { 00:11:08.061 "name": null, 00:11:08.061 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:08.061 "is_configured": false, 00:11:08.061 "data_offset": 0, 00:11:08.061 "data_size": 65536 00:11:08.061 }, 00:11:08.061 { 00:11:08.061 "name": "BaseBdev2", 00:11:08.061 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:08.061 "is_configured": true, 00:11:08.061 "data_offset": 0, 00:11:08.061 "data_size": 65536 00:11:08.061 }, 00:11:08.061 { 00:11:08.061 "name": "BaseBdev3", 00:11:08.061 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:08.061 "is_configured": true, 00:11:08.061 "data_offset": 0, 00:11:08.061 "data_size": 65536 00:11:08.061 }, 00:11:08.061 { 00:11:08.061 "name": "BaseBdev4", 00:11:08.061 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:08.061 "is_configured": true, 00:11:08.061 "data_offset": 0, 00:11:08.061 "data_size": 65536 00:11:08.061 } 00:11:08.061 ] 00:11:08.061 }' 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.061 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.350 
11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:08.350 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e5bd2d1-ceb4-4caa-a51a-ede9b2359172 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.630 [2024-11-04 11:43:33.923969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:08.630 [2024-11-04 11:43:33.924076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:08.630 [2024-11-04 11:43:33.924124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:08.630 [2024-11-04 11:43:33.924435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:11:08.630 [2024-11-04 11:43:33.924648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:08.630 [2024-11-04 11:43:33.924695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:08.630 [2024-11-04 11:43:33.925018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.630 NewBaseBdev 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:08.630 [ 00:11:08.630 { 00:11:08.630 "name": "NewBaseBdev", 00:11:08.630 "aliases": [ 00:11:08.630 "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172" 00:11:08.630 ], 00:11:08.630 "product_name": "Malloc disk", 00:11:08.630 "block_size": 512, 00:11:08.630 "num_blocks": 65536, 00:11:08.630 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:08.630 "assigned_rate_limits": { 00:11:08.630 "rw_ios_per_sec": 0, 00:11:08.630 "rw_mbytes_per_sec": 0, 00:11:08.630 "r_mbytes_per_sec": 0, 00:11:08.630 "w_mbytes_per_sec": 0 00:11:08.630 }, 00:11:08.630 "claimed": true, 00:11:08.630 "claim_type": "exclusive_write", 00:11:08.630 "zoned": false, 00:11:08.630 "supported_io_types": { 00:11:08.630 "read": true, 00:11:08.630 "write": true, 00:11:08.630 "unmap": true, 00:11:08.630 "flush": true, 00:11:08.630 "reset": true, 00:11:08.630 "nvme_admin": false, 00:11:08.630 "nvme_io": false, 00:11:08.630 "nvme_io_md": false, 00:11:08.630 "write_zeroes": true, 00:11:08.630 "zcopy": true, 00:11:08.630 "get_zone_info": false, 00:11:08.630 "zone_management": false, 00:11:08.630 "zone_append": false, 00:11:08.630 "compare": false, 00:11:08.630 "compare_and_write": false, 00:11:08.630 "abort": true, 00:11:08.630 "seek_hole": false, 00:11:08.630 "seek_data": false, 00:11:08.630 "copy": true, 00:11:08.630 "nvme_iov_md": false 00:11:08.630 }, 00:11:08.630 "memory_domains": [ 00:11:08.630 { 00:11:08.630 "dma_device_id": "system", 00:11:08.630 "dma_device_type": 1 00:11:08.630 }, 00:11:08.630 { 00:11:08.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.630 "dma_device_type": 2 00:11:08.630 } 00:11:08.630 ], 00:11:08.630 "driver_specific": {} 00:11:08.630 } 00:11:08.630 ] 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.630 11:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.630 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.630 "name": "Existed_Raid", 00:11:08.630 "uuid": "7cbce027-f26d-4c7c-ae21-20f358c07557", 00:11:08.630 "strip_size_kb": 64, 00:11:08.630 "state": "online", 00:11:08.630 "raid_level": "raid0", 00:11:08.630 "superblock": false, 00:11:08.630 "num_base_bdevs": 4, 00:11:08.630 
"num_base_bdevs_discovered": 4, 00:11:08.631 "num_base_bdevs_operational": 4, 00:11:08.631 "base_bdevs_list": [ 00:11:08.631 { 00:11:08.631 "name": "NewBaseBdev", 00:11:08.631 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:08.631 "is_configured": true, 00:11:08.631 "data_offset": 0, 00:11:08.631 "data_size": 65536 00:11:08.631 }, 00:11:08.631 { 00:11:08.631 "name": "BaseBdev2", 00:11:08.631 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:08.631 "is_configured": true, 00:11:08.631 "data_offset": 0, 00:11:08.631 "data_size": 65536 00:11:08.631 }, 00:11:08.631 { 00:11:08.631 "name": "BaseBdev3", 00:11:08.631 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:08.631 "is_configured": true, 00:11:08.631 "data_offset": 0, 00:11:08.631 "data_size": 65536 00:11:08.631 }, 00:11:08.631 { 00:11:08.631 "name": "BaseBdev4", 00:11:08.631 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:08.631 "is_configured": true, 00:11:08.631 "data_offset": 0, 00:11:08.631 "data_size": 65536 00:11:08.631 } 00:11:08.631 ] 00:11:08.631 }' 00:11:08.631 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.631 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.197 [2024-11-04 11:43:34.491468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.197 "name": "Existed_Raid", 00:11:09.197 "aliases": [ 00:11:09.197 "7cbce027-f26d-4c7c-ae21-20f358c07557" 00:11:09.197 ], 00:11:09.197 "product_name": "Raid Volume", 00:11:09.197 "block_size": 512, 00:11:09.197 "num_blocks": 262144, 00:11:09.197 "uuid": "7cbce027-f26d-4c7c-ae21-20f358c07557", 00:11:09.197 "assigned_rate_limits": { 00:11:09.197 "rw_ios_per_sec": 0, 00:11:09.197 "rw_mbytes_per_sec": 0, 00:11:09.197 "r_mbytes_per_sec": 0, 00:11:09.197 "w_mbytes_per_sec": 0 00:11:09.197 }, 00:11:09.197 "claimed": false, 00:11:09.197 "zoned": false, 00:11:09.197 "supported_io_types": { 00:11:09.197 "read": true, 00:11:09.197 "write": true, 00:11:09.197 "unmap": true, 00:11:09.197 "flush": true, 00:11:09.197 "reset": true, 00:11:09.197 "nvme_admin": false, 00:11:09.197 "nvme_io": false, 00:11:09.197 "nvme_io_md": false, 00:11:09.197 "write_zeroes": true, 00:11:09.197 "zcopy": false, 00:11:09.197 "get_zone_info": false, 00:11:09.197 "zone_management": false, 00:11:09.197 "zone_append": false, 00:11:09.197 "compare": false, 00:11:09.197 "compare_and_write": false, 00:11:09.197 "abort": false, 00:11:09.197 "seek_hole": false, 00:11:09.197 "seek_data": false, 00:11:09.197 "copy": false, 00:11:09.197 "nvme_iov_md": false 00:11:09.197 }, 00:11:09.197 "memory_domains": [ 
00:11:09.197 { 00:11:09.197 "dma_device_id": "system", 00:11:09.197 "dma_device_type": 1 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.197 "dma_device_type": 2 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "dma_device_id": "system", 00:11:09.197 "dma_device_type": 1 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.197 "dma_device_type": 2 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "dma_device_id": "system", 00:11:09.197 "dma_device_type": 1 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.197 "dma_device_type": 2 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "dma_device_id": "system", 00:11:09.197 "dma_device_type": 1 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.197 "dma_device_type": 2 00:11:09.197 } 00:11:09.197 ], 00:11:09.197 "driver_specific": { 00:11:09.197 "raid": { 00:11:09.197 "uuid": "7cbce027-f26d-4c7c-ae21-20f358c07557", 00:11:09.197 "strip_size_kb": 64, 00:11:09.197 "state": "online", 00:11:09.197 "raid_level": "raid0", 00:11:09.197 "superblock": false, 00:11:09.197 "num_base_bdevs": 4, 00:11:09.197 "num_base_bdevs_discovered": 4, 00:11:09.197 "num_base_bdevs_operational": 4, 00:11:09.197 "base_bdevs_list": [ 00:11:09.197 { 00:11:09.197 "name": "NewBaseBdev", 00:11:09.197 "uuid": "1e5bd2d1-ceb4-4caa-a51a-ede9b2359172", 00:11:09.197 "is_configured": true, 00:11:09.197 "data_offset": 0, 00:11:09.197 "data_size": 65536 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "name": "BaseBdev2", 00:11:09.197 "uuid": "81c97e40-18a3-4085-8245-ce46ba539823", 00:11:09.197 "is_configured": true, 00:11:09.197 "data_offset": 0, 00:11:09.197 "data_size": 65536 00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "name": "BaseBdev3", 00:11:09.197 "uuid": "d2bce5c2-d606-4ae3-a61c-ea3517cf2131", 00:11:09.197 "is_configured": true, 00:11:09.197 "data_offset": 0, 00:11:09.197 "data_size": 65536 
00:11:09.197 }, 00:11:09.197 { 00:11:09.197 "name": "BaseBdev4", 00:11:09.197 "uuid": "2e3311d9-4958-4946-8dbc-15d7883e7e1e", 00:11:09.197 "is_configured": true, 00:11:09.197 "data_offset": 0, 00:11:09.197 "data_size": 65536 00:11:09.197 } 00:11:09.197 ] 00:11:09.197 } 00:11:09.197 } 00:11:09.197 }' 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.197 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:09.197 BaseBdev2 00:11:09.198 BaseBdev3 00:11:09.198 BaseBdev4' 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.198 
11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.198 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.456 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.457 [2024-11-04 11:43:34.814534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.457 [2024-11-04 11:43:34.814602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.457 [2024-11-04 11:43:34.814706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.457 [2024-11-04 11:43:34.814789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.457 [2024-11-04 11:43:34.814836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69608 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 69608 ']' 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69608 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69608 00:11:09.457 killing process with pid 69608 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69608' 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69608 00:11:09.457 [2024-11-04 11:43:34.855386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.457 11:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69608 00:11:10.023 [2024-11-04 11:43:35.267490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.961 11:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:10.961 00:11:10.961 real 0m11.752s 00:11:10.961 user 0m18.709s 00:11:10.961 sys 0m2.052s 00:11:10.961 ************************************ 00:11:10.961 END TEST raid_state_function_test 00:11:10.961 ************************************ 00:11:10.961 11:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.961 11:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.961 11:43:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:10.961 11:43:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:10.961 11:43:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.961 11:43:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.220 ************************************ 00:11:11.220 START TEST raid_state_function_test_sb 00:11:11.220 ************************************ 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:11.220 
11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:11.220 Process raid pid: 70279 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70279 00:11:11.220 11:43:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70279' 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70279 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70279 ']' 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:11.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:11.220 11:43:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.220 [2024-11-04 11:43:36.583877] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:11:11.220 [2024-11-04 11:43:36.583995] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.479 [2024-11-04 11:43:36.760625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.479 [2024-11-04 11:43:36.887082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.737 [2024-11-04 11:43:37.102185] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.738 [2024-11-04 11:43:37.102232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.996 [2024-11-04 11:43:37.483373] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.996 [2024-11-04 11:43:37.483442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.996 [2024-11-04 11:43:37.483454] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.996 [2024-11-04 11:43:37.483483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.996 [2024-11-04 11:43:37.483491] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:11.996 [2024-11-04 11:43:37.483501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.996 [2024-11-04 11:43:37.483509] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.996 [2024-11-04 11:43:37.483519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.996 11:43:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.996 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.256 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.256 "name": "Existed_Raid", 00:11:12.256 "uuid": "6d0a4d53-732c-4d86-8fcc-fc4e8a4d621c", 00:11:12.256 "strip_size_kb": 64, 00:11:12.256 "state": "configuring", 00:11:12.256 "raid_level": "raid0", 00:11:12.256 "superblock": true, 00:11:12.256 "num_base_bdevs": 4, 00:11:12.256 "num_base_bdevs_discovered": 0, 00:11:12.256 "num_base_bdevs_operational": 4, 00:11:12.256 "base_bdevs_list": [ 00:11:12.256 { 00:11:12.256 "name": "BaseBdev1", 00:11:12.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.256 "is_configured": false, 00:11:12.256 "data_offset": 0, 00:11:12.256 "data_size": 0 00:11:12.256 }, 00:11:12.256 { 00:11:12.256 "name": "BaseBdev2", 00:11:12.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.256 "is_configured": false, 00:11:12.256 "data_offset": 0, 00:11:12.256 "data_size": 0 00:11:12.256 }, 00:11:12.256 { 00:11:12.256 "name": "BaseBdev3", 00:11:12.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.256 "is_configured": false, 00:11:12.256 "data_offset": 0, 00:11:12.256 "data_size": 0 00:11:12.256 }, 00:11:12.256 { 00:11:12.256 "name": "BaseBdev4", 00:11:12.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.256 "is_configured": false, 00:11:12.256 "data_offset": 0, 00:11:12.256 "data_size": 0 00:11:12.256 } 00:11:12.256 ] 00:11:12.256 }' 00:11:12.256 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.256 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.515 11:43:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.515 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.515 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.515 [2024-11-04 11:43:37.946483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.516 [2024-11-04 11:43:37.946571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.516 [2024-11-04 11:43:37.954471] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:12.516 [2024-11-04 11:43:37.954556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:12.516 [2024-11-04 11:43:37.954608] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.516 [2024-11-04 11:43:37.954648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.516 [2024-11-04 11:43:37.954682] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.516 [2024-11-04 11:43:37.954724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.516 [2024-11-04 11:43:37.954753] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:12.516 [2024-11-04 11:43:37.954789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.516 [2024-11-04 11:43:37.996381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.516 BaseBdev1 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.516 11:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.516 [ 00:11:12.516 { 00:11:12.516 "name": "BaseBdev1", 00:11:12.516 "aliases": [ 00:11:12.516 "4508fb16-2184-4792-a2c0-ef70ae40c5f0" 00:11:12.516 ], 00:11:12.516 "product_name": "Malloc disk", 00:11:12.516 "block_size": 512, 00:11:12.516 "num_blocks": 65536, 00:11:12.516 "uuid": "4508fb16-2184-4792-a2c0-ef70ae40c5f0", 00:11:12.516 "assigned_rate_limits": { 00:11:12.516 "rw_ios_per_sec": 0, 00:11:12.516 "rw_mbytes_per_sec": 0, 00:11:12.516 "r_mbytes_per_sec": 0, 00:11:12.516 "w_mbytes_per_sec": 0 00:11:12.516 }, 00:11:12.516 "claimed": true, 00:11:12.516 "claim_type": "exclusive_write", 00:11:12.516 "zoned": false, 00:11:12.516 "supported_io_types": { 00:11:12.516 "read": true, 00:11:12.516 "write": true, 00:11:12.516 "unmap": true, 00:11:12.516 "flush": true, 00:11:12.516 "reset": true, 00:11:12.516 "nvme_admin": false, 00:11:12.516 "nvme_io": false, 00:11:12.516 "nvme_io_md": false, 00:11:12.516 "write_zeroes": true, 00:11:12.516 "zcopy": true, 00:11:12.516 "get_zone_info": false, 00:11:12.516 "zone_management": false, 00:11:12.516 "zone_append": false, 00:11:12.516 "compare": false, 00:11:12.516 "compare_and_write": false, 00:11:12.516 "abort": true, 00:11:12.516 "seek_hole": false, 00:11:12.516 "seek_data": false, 00:11:12.516 "copy": true, 00:11:12.516 "nvme_iov_md": false 00:11:12.516 }, 00:11:12.516 "memory_domains": [ 00:11:12.516 { 00:11:12.516 "dma_device_id": "system", 00:11:12.516 "dma_device_type": 1 00:11:12.516 }, 00:11:12.516 { 00:11:12.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.516 "dma_device_type": 2 00:11:12.516 } 00:11:12.516 ], 00:11:12.516 "driver_specific": {} 
00:11:12.516 } 00:11:12.516 ] 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:12.516 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.776 "name": "Existed_Raid", 00:11:12.776 "uuid": "235a445b-0ff1-40e6-9e30-8cebbb1a90eb", 00:11:12.776 "strip_size_kb": 64, 00:11:12.776 "state": "configuring", 00:11:12.776 "raid_level": "raid0", 00:11:12.776 "superblock": true, 00:11:12.776 "num_base_bdevs": 4, 00:11:12.776 "num_base_bdevs_discovered": 1, 00:11:12.776 "num_base_bdevs_operational": 4, 00:11:12.776 "base_bdevs_list": [ 00:11:12.776 { 00:11:12.776 "name": "BaseBdev1", 00:11:12.776 "uuid": "4508fb16-2184-4792-a2c0-ef70ae40c5f0", 00:11:12.776 "is_configured": true, 00:11:12.776 "data_offset": 2048, 00:11:12.776 "data_size": 63488 00:11:12.776 }, 00:11:12.776 { 00:11:12.776 "name": "BaseBdev2", 00:11:12.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.776 "is_configured": false, 00:11:12.776 "data_offset": 0, 00:11:12.776 "data_size": 0 00:11:12.776 }, 00:11:12.776 { 00:11:12.776 "name": "BaseBdev3", 00:11:12.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.776 "is_configured": false, 00:11:12.776 "data_offset": 0, 00:11:12.776 "data_size": 0 00:11:12.776 }, 00:11:12.776 { 00:11:12.776 "name": "BaseBdev4", 00:11:12.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.776 "is_configured": false, 00:11:12.776 "data_offset": 0, 00:11:12.776 "data_size": 0 00:11:12.776 } 00:11:12.776 ] 00:11:12.776 }' 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.776 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.034 [2024-11-04 11:43:38.467689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.034 [2024-11-04 11:43:38.467753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.034 [2024-11-04 11:43:38.479735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.034 [2024-11-04 11:43:38.481921] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.034 [2024-11-04 11:43:38.482010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.034 [2024-11-04 11:43:38.482063] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.034 [2024-11-04 11:43:38.482110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.034 [2024-11-04 11:43:38.482140] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.034 [2024-11-04 11:43:38.482181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:13.034 11:43:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.034 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.035 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.035 "name": 
"Existed_Raid", 00:11:13.035 "uuid": "d45d1a75-f303-4b19-8db7-a26c0512e762", 00:11:13.035 "strip_size_kb": 64, 00:11:13.035 "state": "configuring", 00:11:13.035 "raid_level": "raid0", 00:11:13.035 "superblock": true, 00:11:13.035 "num_base_bdevs": 4, 00:11:13.035 "num_base_bdevs_discovered": 1, 00:11:13.035 "num_base_bdevs_operational": 4, 00:11:13.035 "base_bdevs_list": [ 00:11:13.035 { 00:11:13.035 "name": "BaseBdev1", 00:11:13.035 "uuid": "4508fb16-2184-4792-a2c0-ef70ae40c5f0", 00:11:13.035 "is_configured": true, 00:11:13.035 "data_offset": 2048, 00:11:13.035 "data_size": 63488 00:11:13.035 }, 00:11:13.035 { 00:11:13.035 "name": "BaseBdev2", 00:11:13.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.035 "is_configured": false, 00:11:13.035 "data_offset": 0, 00:11:13.035 "data_size": 0 00:11:13.035 }, 00:11:13.035 { 00:11:13.035 "name": "BaseBdev3", 00:11:13.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.035 "is_configured": false, 00:11:13.035 "data_offset": 0, 00:11:13.035 "data_size": 0 00:11:13.035 }, 00:11:13.035 { 00:11:13.035 "name": "BaseBdev4", 00:11:13.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.035 "is_configured": false, 00:11:13.035 "data_offset": 0, 00:11:13.035 "data_size": 0 00:11:13.035 } 00:11:13.035 ] 00:11:13.035 }' 00:11:13.035 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.035 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.603 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:13.603 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.604 [2024-11-04 11:43:38.950483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:13.604 BaseBdev2 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.604 [ 00:11:13.604 { 00:11:13.604 "name": "BaseBdev2", 00:11:13.604 "aliases": [ 00:11:13.604 "4ef3f74a-5d1e-4035-9385-a1f4ccbe148a" 00:11:13.604 ], 00:11:13.604 "product_name": "Malloc disk", 00:11:13.604 "block_size": 512, 00:11:13.604 "num_blocks": 65536, 00:11:13.604 "uuid": "4ef3f74a-5d1e-4035-9385-a1f4ccbe148a", 00:11:13.604 
"assigned_rate_limits": { 00:11:13.604 "rw_ios_per_sec": 0, 00:11:13.604 "rw_mbytes_per_sec": 0, 00:11:13.604 "r_mbytes_per_sec": 0, 00:11:13.604 "w_mbytes_per_sec": 0 00:11:13.604 }, 00:11:13.604 "claimed": true, 00:11:13.604 "claim_type": "exclusive_write", 00:11:13.604 "zoned": false, 00:11:13.604 "supported_io_types": { 00:11:13.604 "read": true, 00:11:13.604 "write": true, 00:11:13.604 "unmap": true, 00:11:13.604 "flush": true, 00:11:13.604 "reset": true, 00:11:13.604 "nvme_admin": false, 00:11:13.604 "nvme_io": false, 00:11:13.604 "nvme_io_md": false, 00:11:13.604 "write_zeroes": true, 00:11:13.604 "zcopy": true, 00:11:13.604 "get_zone_info": false, 00:11:13.604 "zone_management": false, 00:11:13.604 "zone_append": false, 00:11:13.604 "compare": false, 00:11:13.604 "compare_and_write": false, 00:11:13.604 "abort": true, 00:11:13.604 "seek_hole": false, 00:11:13.604 "seek_data": false, 00:11:13.604 "copy": true, 00:11:13.604 "nvme_iov_md": false 00:11:13.604 }, 00:11:13.604 "memory_domains": [ 00:11:13.604 { 00:11:13.604 "dma_device_id": "system", 00:11:13.604 "dma_device_type": 1 00:11:13.604 }, 00:11:13.604 { 00:11:13.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.604 "dma_device_type": 2 00:11:13.604 } 00:11:13.604 ], 00:11:13.604 "driver_specific": {} 00:11:13.604 } 00:11:13.604 ] 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.604 11:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.604 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.604 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.604 "name": "Existed_Raid", 00:11:13.604 "uuid": "d45d1a75-f303-4b19-8db7-a26c0512e762", 00:11:13.604 "strip_size_kb": 64, 00:11:13.604 "state": "configuring", 00:11:13.604 "raid_level": "raid0", 00:11:13.604 "superblock": true, 00:11:13.604 "num_base_bdevs": 4, 00:11:13.604 "num_base_bdevs_discovered": 2, 00:11:13.604 "num_base_bdevs_operational": 4, 
00:11:13.604 "base_bdevs_list": [ 00:11:13.604 { 00:11:13.604 "name": "BaseBdev1", 00:11:13.604 "uuid": "4508fb16-2184-4792-a2c0-ef70ae40c5f0", 00:11:13.604 "is_configured": true, 00:11:13.604 "data_offset": 2048, 00:11:13.604 "data_size": 63488 00:11:13.604 }, 00:11:13.604 { 00:11:13.604 "name": "BaseBdev2", 00:11:13.604 "uuid": "4ef3f74a-5d1e-4035-9385-a1f4ccbe148a", 00:11:13.604 "is_configured": true, 00:11:13.604 "data_offset": 2048, 00:11:13.604 "data_size": 63488 00:11:13.604 }, 00:11:13.604 { 00:11:13.604 "name": "BaseBdev3", 00:11:13.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.604 "is_configured": false, 00:11:13.604 "data_offset": 0, 00:11:13.604 "data_size": 0 00:11:13.604 }, 00:11:13.604 { 00:11:13.604 "name": "BaseBdev4", 00:11:13.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.604 "is_configured": false, 00:11:13.604 "data_offset": 0, 00:11:13.604 "data_size": 0 00:11:13.604 } 00:11:13.604 ] 00:11:13.604 }' 00:11:13.604 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.604 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.201 [2024-11-04 11:43:39.530008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.201 BaseBdev3 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.201 [ 00:11:14.201 { 00:11:14.201 "name": "BaseBdev3", 00:11:14.201 "aliases": [ 00:11:14.201 "a145d48c-d124-4202-8c4a-f6381b0f6eb8" 00:11:14.201 ], 00:11:14.201 "product_name": "Malloc disk", 00:11:14.201 "block_size": 512, 00:11:14.201 "num_blocks": 65536, 00:11:14.201 "uuid": "a145d48c-d124-4202-8c4a-f6381b0f6eb8", 00:11:14.201 "assigned_rate_limits": { 00:11:14.201 "rw_ios_per_sec": 0, 00:11:14.201 "rw_mbytes_per_sec": 0, 00:11:14.201 "r_mbytes_per_sec": 0, 00:11:14.201 "w_mbytes_per_sec": 0 00:11:14.201 }, 00:11:14.201 "claimed": true, 00:11:14.201 "claim_type": "exclusive_write", 00:11:14.201 "zoned": false, 00:11:14.201 "supported_io_types": { 00:11:14.201 "read": true, 00:11:14.201 
"write": true, 00:11:14.201 "unmap": true, 00:11:14.201 "flush": true, 00:11:14.201 "reset": true, 00:11:14.201 "nvme_admin": false, 00:11:14.201 "nvme_io": false, 00:11:14.201 "nvme_io_md": false, 00:11:14.201 "write_zeroes": true, 00:11:14.201 "zcopy": true, 00:11:14.201 "get_zone_info": false, 00:11:14.201 "zone_management": false, 00:11:14.201 "zone_append": false, 00:11:14.201 "compare": false, 00:11:14.201 "compare_and_write": false, 00:11:14.201 "abort": true, 00:11:14.201 "seek_hole": false, 00:11:14.201 "seek_data": false, 00:11:14.201 "copy": true, 00:11:14.201 "nvme_iov_md": false 00:11:14.201 }, 00:11:14.201 "memory_domains": [ 00:11:14.201 { 00:11:14.201 "dma_device_id": "system", 00:11:14.201 "dma_device_type": 1 00:11:14.201 }, 00:11:14.201 { 00:11:14.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.201 "dma_device_type": 2 00:11:14.201 } 00:11:14.201 ], 00:11:14.201 "driver_specific": {} 00:11:14.201 } 00:11:14.201 ] 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.201 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.201 "name": "Existed_Raid", 00:11:14.201 "uuid": "d45d1a75-f303-4b19-8db7-a26c0512e762", 00:11:14.201 "strip_size_kb": 64, 00:11:14.201 "state": "configuring", 00:11:14.201 "raid_level": "raid0", 00:11:14.201 "superblock": true, 00:11:14.201 "num_base_bdevs": 4, 00:11:14.201 "num_base_bdevs_discovered": 3, 00:11:14.201 "num_base_bdevs_operational": 4, 00:11:14.201 "base_bdevs_list": [ 00:11:14.201 { 00:11:14.201 "name": "BaseBdev1", 00:11:14.201 "uuid": "4508fb16-2184-4792-a2c0-ef70ae40c5f0", 00:11:14.201 "is_configured": true, 00:11:14.201 "data_offset": 2048, 00:11:14.201 "data_size": 63488 00:11:14.201 }, 00:11:14.201 { 00:11:14.201 "name": "BaseBdev2", 00:11:14.201 "uuid": 
"4ef3f74a-5d1e-4035-9385-a1f4ccbe148a", 00:11:14.201 "is_configured": true, 00:11:14.201 "data_offset": 2048, 00:11:14.201 "data_size": 63488 00:11:14.201 }, 00:11:14.201 { 00:11:14.201 "name": "BaseBdev3", 00:11:14.201 "uuid": "a145d48c-d124-4202-8c4a-f6381b0f6eb8", 00:11:14.201 "is_configured": true, 00:11:14.201 "data_offset": 2048, 00:11:14.201 "data_size": 63488 00:11:14.202 }, 00:11:14.202 { 00:11:14.202 "name": "BaseBdev4", 00:11:14.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.202 "is_configured": false, 00:11:14.202 "data_offset": 0, 00:11:14.202 "data_size": 0 00:11:14.202 } 00:11:14.202 ] 00:11:14.202 }' 00:11:14.202 11:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.202 11:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.770 [2024-11-04 11:43:40.076433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.770 [2024-11-04 11:43:40.076713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:14.770 [2024-11-04 11:43:40.076736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:14.770 BaseBdev4 00:11:14.770 [2024-11-04 11:43:40.077038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:14.770 [2024-11-04 11:43:40.077207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:14.770 [2024-11-04 11:43:40.077224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:14.770 [2024-11-04 11:43:40.077378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.770 [ 00:11:14.770 { 00:11:14.770 "name": "BaseBdev4", 00:11:14.770 "aliases": [ 00:11:14.770 "469ad04b-6d22-4f2e-a562-6bb359c9dbed" 00:11:14.770 ], 00:11:14.770 "product_name": "Malloc disk", 00:11:14.770 "block_size": 512, 00:11:14.770 
"num_blocks": 65536, 00:11:14.770 "uuid": "469ad04b-6d22-4f2e-a562-6bb359c9dbed", 00:11:14.770 "assigned_rate_limits": { 00:11:14.770 "rw_ios_per_sec": 0, 00:11:14.770 "rw_mbytes_per_sec": 0, 00:11:14.770 "r_mbytes_per_sec": 0, 00:11:14.770 "w_mbytes_per_sec": 0 00:11:14.770 }, 00:11:14.770 "claimed": true, 00:11:14.770 "claim_type": "exclusive_write", 00:11:14.770 "zoned": false, 00:11:14.770 "supported_io_types": { 00:11:14.770 "read": true, 00:11:14.770 "write": true, 00:11:14.770 "unmap": true, 00:11:14.770 "flush": true, 00:11:14.770 "reset": true, 00:11:14.770 "nvme_admin": false, 00:11:14.770 "nvme_io": false, 00:11:14.770 "nvme_io_md": false, 00:11:14.770 "write_zeroes": true, 00:11:14.770 "zcopy": true, 00:11:14.770 "get_zone_info": false, 00:11:14.770 "zone_management": false, 00:11:14.770 "zone_append": false, 00:11:14.770 "compare": false, 00:11:14.770 "compare_and_write": false, 00:11:14.770 "abort": true, 00:11:14.770 "seek_hole": false, 00:11:14.770 "seek_data": false, 00:11:14.770 "copy": true, 00:11:14.770 "nvme_iov_md": false 00:11:14.770 }, 00:11:14.770 "memory_domains": [ 00:11:14.770 { 00:11:14.770 "dma_device_id": "system", 00:11:14.770 "dma_device_type": 1 00:11:14.770 }, 00:11:14.770 { 00:11:14.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.770 "dma_device_type": 2 00:11:14.770 } 00:11:14.770 ], 00:11:14.770 "driver_specific": {} 00:11:14.770 } 00:11:14.770 ] 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.770 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.770 "name": "Existed_Raid", 00:11:14.771 "uuid": "d45d1a75-f303-4b19-8db7-a26c0512e762", 00:11:14.771 "strip_size_kb": 64, 00:11:14.771 "state": "online", 00:11:14.771 "raid_level": "raid0", 00:11:14.771 "superblock": true, 00:11:14.771 "num_base_bdevs": 4, 
00:11:14.771 "num_base_bdevs_discovered": 4, 00:11:14.771 "num_base_bdevs_operational": 4, 00:11:14.771 "base_bdevs_list": [ 00:11:14.771 { 00:11:14.771 "name": "BaseBdev1", 00:11:14.771 "uuid": "4508fb16-2184-4792-a2c0-ef70ae40c5f0", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 }, 00:11:14.771 { 00:11:14.771 "name": "BaseBdev2", 00:11:14.771 "uuid": "4ef3f74a-5d1e-4035-9385-a1f4ccbe148a", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 }, 00:11:14.771 { 00:11:14.771 "name": "BaseBdev3", 00:11:14.771 "uuid": "a145d48c-d124-4202-8c4a-f6381b0f6eb8", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 }, 00:11:14.771 { 00:11:14.771 "name": "BaseBdev4", 00:11:14.771 "uuid": "469ad04b-6d22-4f2e-a562-6bb359c9dbed", 00:11:14.771 "is_configured": true, 00:11:14.771 "data_offset": 2048, 00:11:14.771 "data_size": 63488 00:11:14.771 } 00:11:14.771 ] 00:11:14.771 }' 00:11:14.771 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.771 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.339 
11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.339 [2024-11-04 11:43:40.600037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.339 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.339 "name": "Existed_Raid", 00:11:15.339 "aliases": [ 00:11:15.339 "d45d1a75-f303-4b19-8db7-a26c0512e762" 00:11:15.339 ], 00:11:15.339 "product_name": "Raid Volume", 00:11:15.339 "block_size": 512, 00:11:15.339 "num_blocks": 253952, 00:11:15.339 "uuid": "d45d1a75-f303-4b19-8db7-a26c0512e762", 00:11:15.339 "assigned_rate_limits": { 00:11:15.339 "rw_ios_per_sec": 0, 00:11:15.339 "rw_mbytes_per_sec": 0, 00:11:15.339 "r_mbytes_per_sec": 0, 00:11:15.339 "w_mbytes_per_sec": 0 00:11:15.339 }, 00:11:15.339 "claimed": false, 00:11:15.339 "zoned": false, 00:11:15.339 "supported_io_types": { 00:11:15.339 "read": true, 00:11:15.339 "write": true, 00:11:15.339 "unmap": true, 00:11:15.339 "flush": true, 00:11:15.339 "reset": true, 00:11:15.339 "nvme_admin": false, 00:11:15.339 "nvme_io": false, 00:11:15.339 "nvme_io_md": false, 00:11:15.339 "write_zeroes": true, 00:11:15.339 "zcopy": false, 00:11:15.339 "get_zone_info": false, 00:11:15.339 "zone_management": false, 00:11:15.339 "zone_append": false, 00:11:15.339 "compare": false, 00:11:15.339 "compare_and_write": false, 00:11:15.339 "abort": false, 00:11:15.339 "seek_hole": false, 00:11:15.339 "seek_data": false, 00:11:15.339 "copy": false, 00:11:15.339 
"nvme_iov_md": false 00:11:15.339 }, 00:11:15.339 "memory_domains": [ 00:11:15.339 { 00:11:15.339 "dma_device_id": "system", 00:11:15.339 "dma_device_type": 1 00:11:15.339 }, 00:11:15.339 { 00:11:15.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.339 "dma_device_type": 2 00:11:15.339 }, 00:11:15.339 { 00:11:15.339 "dma_device_id": "system", 00:11:15.339 "dma_device_type": 1 00:11:15.339 }, 00:11:15.339 { 00:11:15.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.339 "dma_device_type": 2 00:11:15.339 }, 00:11:15.339 { 00:11:15.339 "dma_device_id": "system", 00:11:15.339 "dma_device_type": 1 00:11:15.339 }, 00:11:15.339 { 00:11:15.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.339 "dma_device_type": 2 00:11:15.339 }, 00:11:15.339 { 00:11:15.339 "dma_device_id": "system", 00:11:15.339 "dma_device_type": 1 00:11:15.339 }, 00:11:15.339 { 00:11:15.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.340 "dma_device_type": 2 00:11:15.340 } 00:11:15.340 ], 00:11:15.340 "driver_specific": { 00:11:15.340 "raid": { 00:11:15.340 "uuid": "d45d1a75-f303-4b19-8db7-a26c0512e762", 00:11:15.340 "strip_size_kb": 64, 00:11:15.340 "state": "online", 00:11:15.340 "raid_level": "raid0", 00:11:15.340 "superblock": true, 00:11:15.340 "num_base_bdevs": 4, 00:11:15.340 "num_base_bdevs_discovered": 4, 00:11:15.340 "num_base_bdevs_operational": 4, 00:11:15.340 "base_bdevs_list": [ 00:11:15.340 { 00:11:15.340 "name": "BaseBdev1", 00:11:15.340 "uuid": "4508fb16-2184-4792-a2c0-ef70ae40c5f0", 00:11:15.340 "is_configured": true, 00:11:15.340 "data_offset": 2048, 00:11:15.340 "data_size": 63488 00:11:15.340 }, 00:11:15.340 { 00:11:15.340 "name": "BaseBdev2", 00:11:15.340 "uuid": "4ef3f74a-5d1e-4035-9385-a1f4ccbe148a", 00:11:15.340 "is_configured": true, 00:11:15.340 "data_offset": 2048, 00:11:15.340 "data_size": 63488 00:11:15.340 }, 00:11:15.340 { 00:11:15.340 "name": "BaseBdev3", 00:11:15.340 "uuid": "a145d48c-d124-4202-8c4a-f6381b0f6eb8", 00:11:15.340 "is_configured": true, 
00:11:15.340 "data_offset": 2048, 00:11:15.340 "data_size": 63488 00:11:15.340 }, 00:11:15.340 { 00:11:15.340 "name": "BaseBdev4", 00:11:15.340 "uuid": "469ad04b-6d22-4f2e-a562-6bb359c9dbed", 00:11:15.340 "is_configured": true, 00:11:15.340 "data_offset": 2048, 00:11:15.340 "data_size": 63488 00:11:15.340 } 00:11:15.340 ] 00:11:15.340 } 00:11:15.340 } 00:11:15.340 }' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.340 BaseBdev2 00:11:15.340 BaseBdev3 00:11:15.340 BaseBdev4' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.340 11:43:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.340 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.599 11:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.599 [2024-11-04 11:43:40.915237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.599 [2024-11-04 11:43:40.915278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.599 [2024-11-04 11:43:40.915331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.599 "name": "Existed_Raid", 00:11:15.599 "uuid": "d45d1a75-f303-4b19-8db7-a26c0512e762", 00:11:15.599 "strip_size_kb": 64, 00:11:15.599 "state": "offline", 00:11:15.599 "raid_level": "raid0", 00:11:15.599 "superblock": true, 00:11:15.599 "num_base_bdevs": 4, 00:11:15.599 "num_base_bdevs_discovered": 3, 00:11:15.599 "num_base_bdevs_operational": 3, 00:11:15.599 "base_bdevs_list": [ 00:11:15.599 { 00:11:15.599 "name": null, 00:11:15.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.599 "is_configured": false, 00:11:15.599 "data_offset": 0, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev2", 00:11:15.599 "uuid": "4ef3f74a-5d1e-4035-9385-a1f4ccbe148a", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev3", 00:11:15.599 "uuid": "a145d48c-d124-4202-8c4a-f6381b0f6eb8", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 }, 00:11:15.599 { 00:11:15.599 "name": "BaseBdev4", 00:11:15.599 "uuid": "469ad04b-6d22-4f2e-a562-6bb359c9dbed", 00:11:15.599 "is_configured": true, 00:11:15.599 "data_offset": 2048, 00:11:15.599 "data_size": 63488 00:11:15.599 } 00:11:15.599 ] 00:11:15.599 }' 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.599 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.167 
11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.167 [2024-11-04 11:43:41.498559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.167 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.167 [2024-11-04 11:43:41.663794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:16.425 11:43:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.425 [2024-11-04 11:43:41.825282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:16.425 [2024-11-04 11:43:41.825342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.425 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.684 11:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.684 BaseBdev2 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.684 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.684 [ 00:11:16.684 { 00:11:16.684 "name": "BaseBdev2", 00:11:16.684 "aliases": [ 00:11:16.684 
"e8bc6d36-4257-4700-b860-6f97573eaa7d" 00:11:16.684 ], 00:11:16.684 "product_name": "Malloc disk", 00:11:16.684 "block_size": 512, 00:11:16.685 "num_blocks": 65536, 00:11:16.685 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:16.685 "assigned_rate_limits": { 00:11:16.685 "rw_ios_per_sec": 0, 00:11:16.685 "rw_mbytes_per_sec": 0, 00:11:16.685 "r_mbytes_per_sec": 0, 00:11:16.685 "w_mbytes_per_sec": 0 00:11:16.685 }, 00:11:16.685 "claimed": false, 00:11:16.685 "zoned": false, 00:11:16.685 "supported_io_types": { 00:11:16.685 "read": true, 00:11:16.685 "write": true, 00:11:16.685 "unmap": true, 00:11:16.685 "flush": true, 00:11:16.685 "reset": true, 00:11:16.685 "nvme_admin": false, 00:11:16.685 "nvme_io": false, 00:11:16.685 "nvme_io_md": false, 00:11:16.685 "write_zeroes": true, 00:11:16.685 "zcopy": true, 00:11:16.685 "get_zone_info": false, 00:11:16.685 "zone_management": false, 00:11:16.685 "zone_append": false, 00:11:16.685 "compare": false, 00:11:16.685 "compare_and_write": false, 00:11:16.685 "abort": true, 00:11:16.685 "seek_hole": false, 00:11:16.685 "seek_data": false, 00:11:16.685 "copy": true, 00:11:16.685 "nvme_iov_md": false 00:11:16.685 }, 00:11:16.685 "memory_domains": [ 00:11:16.685 { 00:11:16.685 "dma_device_id": "system", 00:11:16.685 "dma_device_type": 1 00:11:16.685 }, 00:11:16.685 { 00:11:16.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.685 "dma_device_type": 2 00:11:16.685 } 00:11:16.685 ], 00:11:16.685 "driver_specific": {} 00:11:16.685 } 00:11:16.685 ] 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.685 11:43:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.685 BaseBdev3 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.685 [ 00:11:16.685 { 
00:11:16.685 "name": "BaseBdev3", 00:11:16.685 "aliases": [ 00:11:16.685 "745492b2-8941-4fd8-babf-fe6f8fa60a5a" 00:11:16.685 ], 00:11:16.685 "product_name": "Malloc disk", 00:11:16.685 "block_size": 512, 00:11:16.685 "num_blocks": 65536, 00:11:16.685 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:16.685 "assigned_rate_limits": { 00:11:16.685 "rw_ios_per_sec": 0, 00:11:16.685 "rw_mbytes_per_sec": 0, 00:11:16.685 "r_mbytes_per_sec": 0, 00:11:16.685 "w_mbytes_per_sec": 0 00:11:16.685 }, 00:11:16.685 "claimed": false, 00:11:16.685 "zoned": false, 00:11:16.685 "supported_io_types": { 00:11:16.685 "read": true, 00:11:16.685 "write": true, 00:11:16.685 "unmap": true, 00:11:16.685 "flush": true, 00:11:16.685 "reset": true, 00:11:16.685 "nvme_admin": false, 00:11:16.685 "nvme_io": false, 00:11:16.685 "nvme_io_md": false, 00:11:16.685 "write_zeroes": true, 00:11:16.685 "zcopy": true, 00:11:16.685 "get_zone_info": false, 00:11:16.685 "zone_management": false, 00:11:16.685 "zone_append": false, 00:11:16.685 "compare": false, 00:11:16.685 "compare_and_write": false, 00:11:16.685 "abort": true, 00:11:16.685 "seek_hole": false, 00:11:16.685 "seek_data": false, 00:11:16.685 "copy": true, 00:11:16.685 "nvme_iov_md": false 00:11:16.685 }, 00:11:16.685 "memory_domains": [ 00:11:16.685 { 00:11:16.685 "dma_device_id": "system", 00:11:16.685 "dma_device_type": 1 00:11:16.685 }, 00:11:16.685 { 00:11:16.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.685 "dma_device_type": 2 00:11:16.685 } 00:11:16.685 ], 00:11:16.685 "driver_specific": {} 00:11:16.685 } 00:11:16.685 ] 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.685 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.945 BaseBdev4 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:16.945 [ 00:11:16.945 { 00:11:16.945 "name": "BaseBdev4", 00:11:16.945 "aliases": [ 00:11:16.945 "e3459627-a3b9-4839-abf0-0275e08e069e" 00:11:16.945 ], 00:11:16.945 "product_name": "Malloc disk", 00:11:16.945 "block_size": 512, 00:11:16.945 "num_blocks": 65536, 00:11:16.945 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:16.945 "assigned_rate_limits": { 00:11:16.945 "rw_ios_per_sec": 0, 00:11:16.945 "rw_mbytes_per_sec": 0, 00:11:16.945 "r_mbytes_per_sec": 0, 00:11:16.945 "w_mbytes_per_sec": 0 00:11:16.945 }, 00:11:16.945 "claimed": false, 00:11:16.945 "zoned": false, 00:11:16.945 "supported_io_types": { 00:11:16.945 "read": true, 00:11:16.945 "write": true, 00:11:16.945 "unmap": true, 00:11:16.945 "flush": true, 00:11:16.945 "reset": true, 00:11:16.945 "nvme_admin": false, 00:11:16.945 "nvme_io": false, 00:11:16.945 "nvme_io_md": false, 00:11:16.945 "write_zeroes": true, 00:11:16.945 "zcopy": true, 00:11:16.945 "get_zone_info": false, 00:11:16.945 "zone_management": false, 00:11:16.945 "zone_append": false, 00:11:16.945 "compare": false, 00:11:16.945 "compare_and_write": false, 00:11:16.945 "abort": true, 00:11:16.945 "seek_hole": false, 00:11:16.945 "seek_data": false, 00:11:16.945 "copy": true, 00:11:16.945 "nvme_iov_md": false 00:11:16.945 }, 00:11:16.945 "memory_domains": [ 00:11:16.945 { 00:11:16.945 "dma_device_id": "system", 00:11:16.945 "dma_device_type": 1 00:11:16.945 }, 00:11:16.945 { 00:11:16.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.945 "dma_device_type": 2 00:11:16.945 } 00:11:16.945 ], 00:11:16.945 "driver_specific": {} 00:11:16.945 } 00:11:16.945 ] 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.945 11:43:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.945 [2024-11-04 11:43:42.245801] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.945 [2024-11-04 11:43:42.245849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.945 [2024-11-04 11:43:42.245876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.945 [2024-11-04 11:43:42.247916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.945 [2024-11-04 11:43:42.247981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.945 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.945 "name": "Existed_Raid", 00:11:16.945 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:16.945 "strip_size_kb": 64, 00:11:16.945 "state": "configuring", 00:11:16.945 "raid_level": "raid0", 00:11:16.945 "superblock": true, 00:11:16.945 "num_base_bdevs": 4, 00:11:16.945 "num_base_bdevs_discovered": 3, 00:11:16.945 "num_base_bdevs_operational": 4, 00:11:16.945 "base_bdevs_list": [ 00:11:16.945 { 00:11:16.945 "name": "BaseBdev1", 00:11:16.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.946 "is_configured": false, 00:11:16.946 "data_offset": 0, 00:11:16.946 "data_size": 0 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "name": "BaseBdev2", 00:11:16.946 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:16.946 "is_configured": true, 00:11:16.946 "data_offset": 2048, 00:11:16.946 "data_size": 63488 
00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "name": "BaseBdev3", 00:11:16.946 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:16.946 "is_configured": true, 00:11:16.946 "data_offset": 2048, 00:11:16.946 "data_size": 63488 00:11:16.946 }, 00:11:16.946 { 00:11:16.946 "name": "BaseBdev4", 00:11:16.946 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:16.946 "is_configured": true, 00:11:16.946 "data_offset": 2048, 00:11:16.946 "data_size": 63488 00:11:16.946 } 00:11:16.946 ] 00:11:16.946 }' 00:11:16.946 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.946 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.205 [2024-11-04 11:43:42.701011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.205 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.465 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.465 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.465 "name": "Existed_Raid", 00:11:17.465 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:17.465 "strip_size_kb": 64, 00:11:17.465 "state": "configuring", 00:11:17.465 "raid_level": "raid0", 00:11:17.465 "superblock": true, 00:11:17.465 "num_base_bdevs": 4, 00:11:17.465 "num_base_bdevs_discovered": 2, 00:11:17.465 "num_base_bdevs_operational": 4, 00:11:17.465 "base_bdevs_list": [ 00:11:17.465 { 00:11:17.465 "name": "BaseBdev1", 00:11:17.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.465 "is_configured": false, 00:11:17.465 "data_offset": 0, 00:11:17.465 "data_size": 0 00:11:17.465 }, 00:11:17.465 { 00:11:17.465 "name": null, 00:11:17.465 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:17.465 "is_configured": false, 00:11:17.465 "data_offset": 0, 00:11:17.465 "data_size": 63488 
00:11:17.465 }, 00:11:17.465 { 00:11:17.465 "name": "BaseBdev3", 00:11:17.465 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:17.465 "is_configured": true, 00:11:17.465 "data_offset": 2048, 00:11:17.465 "data_size": 63488 00:11:17.465 }, 00:11:17.465 { 00:11:17.465 "name": "BaseBdev4", 00:11:17.465 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:17.465 "is_configured": true, 00:11:17.465 "data_offset": 2048, 00:11:17.465 "data_size": 63488 00:11:17.465 } 00:11:17.465 ] 00:11:17.465 }' 00:11:17.465 11:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.465 11:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.725 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.984 [2024-11-04 11:43:43.249235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.984 BaseBdev1 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.984 [ 00:11:17.984 { 00:11:17.984 "name": "BaseBdev1", 00:11:17.984 "aliases": [ 00:11:17.984 "5678367f-b91b-4a90-b99d-827d45ad360d" 00:11:17.984 ], 00:11:17.984 "product_name": "Malloc disk", 00:11:17.984 "block_size": 512, 00:11:17.984 "num_blocks": 65536, 00:11:17.984 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:17.984 "assigned_rate_limits": { 00:11:17.984 "rw_ios_per_sec": 0, 00:11:17.984 "rw_mbytes_per_sec": 0, 
00:11:17.984 "r_mbytes_per_sec": 0, 00:11:17.984 "w_mbytes_per_sec": 0 00:11:17.984 }, 00:11:17.984 "claimed": true, 00:11:17.984 "claim_type": "exclusive_write", 00:11:17.984 "zoned": false, 00:11:17.984 "supported_io_types": { 00:11:17.984 "read": true, 00:11:17.984 "write": true, 00:11:17.984 "unmap": true, 00:11:17.984 "flush": true, 00:11:17.984 "reset": true, 00:11:17.984 "nvme_admin": false, 00:11:17.984 "nvme_io": false, 00:11:17.984 "nvme_io_md": false, 00:11:17.984 "write_zeroes": true, 00:11:17.984 "zcopy": true, 00:11:17.984 "get_zone_info": false, 00:11:17.984 "zone_management": false, 00:11:17.984 "zone_append": false, 00:11:17.984 "compare": false, 00:11:17.984 "compare_and_write": false, 00:11:17.984 "abort": true, 00:11:17.984 "seek_hole": false, 00:11:17.984 "seek_data": false, 00:11:17.984 "copy": true, 00:11:17.984 "nvme_iov_md": false 00:11:17.984 }, 00:11:17.984 "memory_domains": [ 00:11:17.984 { 00:11:17.984 "dma_device_id": "system", 00:11:17.984 "dma_device_type": 1 00:11:17.984 }, 00:11:17.984 { 00:11:17.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.984 "dma_device_type": 2 00:11:17.984 } 00:11:17.984 ], 00:11:17.984 "driver_specific": {} 00:11:17.984 } 00:11:17.984 ] 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.984 11:43:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.984 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.984 "name": "Existed_Raid", 00:11:17.984 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:17.984 "strip_size_kb": 64, 00:11:17.984 "state": "configuring", 00:11:17.984 "raid_level": "raid0", 00:11:17.984 "superblock": true, 00:11:17.984 "num_base_bdevs": 4, 00:11:17.984 "num_base_bdevs_discovered": 3, 00:11:17.984 "num_base_bdevs_operational": 4, 00:11:17.984 "base_bdevs_list": [ 00:11:17.984 { 00:11:17.984 "name": "BaseBdev1", 00:11:17.984 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:17.984 "is_configured": true, 00:11:17.984 "data_offset": 2048, 00:11:17.984 "data_size": 63488 00:11:17.984 }, 00:11:17.984 { 
00:11:17.984 "name": null, 00:11:17.984 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:17.984 "is_configured": false, 00:11:17.984 "data_offset": 0, 00:11:17.984 "data_size": 63488 00:11:17.984 }, 00:11:17.984 { 00:11:17.985 "name": "BaseBdev3", 00:11:17.985 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:17.985 "is_configured": true, 00:11:17.985 "data_offset": 2048, 00:11:17.985 "data_size": 63488 00:11:17.985 }, 00:11:17.985 { 00:11:17.985 "name": "BaseBdev4", 00:11:17.985 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:17.985 "is_configured": true, 00:11:17.985 "data_offset": 2048, 00:11:17.985 "data_size": 63488 00:11:17.985 } 00:11:17.985 ] 00:11:17.985 }' 00:11:17.985 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.985 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.243 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.243 [2024-11-04 11:43:43.764462] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.503 11:43:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.503 "name": "Existed_Raid", 00:11:18.503 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:18.503 "strip_size_kb": 64, 00:11:18.503 "state": "configuring", 00:11:18.503 "raid_level": "raid0", 00:11:18.503 "superblock": true, 00:11:18.503 "num_base_bdevs": 4, 00:11:18.503 "num_base_bdevs_discovered": 2, 00:11:18.503 "num_base_bdevs_operational": 4, 00:11:18.503 "base_bdevs_list": [ 00:11:18.503 { 00:11:18.503 "name": "BaseBdev1", 00:11:18.503 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:18.503 "is_configured": true, 00:11:18.503 "data_offset": 2048, 00:11:18.503 "data_size": 63488 00:11:18.503 }, 00:11:18.503 { 00:11:18.503 "name": null, 00:11:18.503 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:18.503 "is_configured": false, 00:11:18.503 "data_offset": 0, 00:11:18.503 "data_size": 63488 00:11:18.503 }, 00:11:18.503 { 00:11:18.503 "name": null, 00:11:18.503 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:18.503 "is_configured": false, 00:11:18.503 "data_offset": 0, 00:11:18.503 "data_size": 63488 00:11:18.503 }, 00:11:18.503 { 00:11:18.503 "name": "BaseBdev4", 00:11:18.503 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:18.503 "is_configured": true, 00:11:18.503 "data_offset": 2048, 00:11:18.503 "data_size": 63488 00:11:18.503 } 00:11:18.503 ] 00:11:18.503 }' 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.503 11:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.763 
11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.763 [2024-11-04 11:43:44.255628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.763 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.028 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.028 "name": "Existed_Raid", 00:11:19.028 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:19.028 "strip_size_kb": 64, 00:11:19.028 "state": "configuring", 00:11:19.028 "raid_level": "raid0", 00:11:19.028 "superblock": true, 00:11:19.028 "num_base_bdevs": 4, 00:11:19.028 "num_base_bdevs_discovered": 3, 00:11:19.028 "num_base_bdevs_operational": 4, 00:11:19.028 "base_bdevs_list": [ 00:11:19.028 { 00:11:19.028 "name": "BaseBdev1", 00:11:19.028 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:19.028 "is_configured": true, 00:11:19.028 "data_offset": 2048, 00:11:19.028 "data_size": 63488 00:11:19.028 }, 00:11:19.028 { 00:11:19.028 "name": null, 00:11:19.028 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:19.028 "is_configured": false, 00:11:19.028 "data_offset": 0, 00:11:19.028 "data_size": 63488 00:11:19.028 }, 00:11:19.028 { 00:11:19.028 "name": "BaseBdev3", 00:11:19.028 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:19.028 "is_configured": true, 00:11:19.028 "data_offset": 2048, 00:11:19.028 "data_size": 63488 00:11:19.028 }, 00:11:19.028 { 00:11:19.028 "name": "BaseBdev4", 00:11:19.028 "uuid": 
"e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:19.028 "is_configured": true, 00:11:19.028 "data_offset": 2048, 00:11:19.028 "data_size": 63488 00:11:19.028 } 00:11:19.028 ] 00:11:19.028 }' 00:11:19.028 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.028 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.296 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.296 [2024-11-04 11:43:44.750812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.555 "name": "Existed_Raid", 00:11:19.555 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:19.555 "strip_size_kb": 64, 00:11:19.555 "state": "configuring", 00:11:19.555 "raid_level": "raid0", 00:11:19.555 "superblock": true, 00:11:19.555 "num_base_bdevs": 4, 00:11:19.555 "num_base_bdevs_discovered": 2, 00:11:19.555 "num_base_bdevs_operational": 4, 00:11:19.555 "base_bdevs_list": [ 00:11:19.555 { 00:11:19.555 "name": null, 00:11:19.555 
"uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:19.555 "is_configured": false, 00:11:19.555 "data_offset": 0, 00:11:19.555 "data_size": 63488 00:11:19.555 }, 00:11:19.555 { 00:11:19.555 "name": null, 00:11:19.555 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:19.555 "is_configured": false, 00:11:19.555 "data_offset": 0, 00:11:19.555 "data_size": 63488 00:11:19.555 }, 00:11:19.555 { 00:11:19.555 "name": "BaseBdev3", 00:11:19.555 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:19.555 "is_configured": true, 00:11:19.555 "data_offset": 2048, 00:11:19.555 "data_size": 63488 00:11:19.555 }, 00:11:19.555 { 00:11:19.555 "name": "BaseBdev4", 00:11:19.555 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:19.555 "is_configured": true, 00:11:19.555 "data_offset": 2048, 00:11:19.555 "data_size": 63488 00:11:19.555 } 00:11:19.555 ] 00:11:19.555 }' 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.555 11:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.814 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.814 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.814 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.814 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.072 [2024-11-04 11:43:45.371615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.072 "name": "Existed_Raid", 00:11:20.072 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:20.072 "strip_size_kb": 64, 00:11:20.072 "state": "configuring", 00:11:20.072 "raid_level": "raid0", 00:11:20.072 "superblock": true, 00:11:20.072 "num_base_bdevs": 4, 00:11:20.072 "num_base_bdevs_discovered": 3, 00:11:20.072 "num_base_bdevs_operational": 4, 00:11:20.072 "base_bdevs_list": [ 00:11:20.072 { 00:11:20.072 "name": null, 00:11:20.072 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:20.072 "is_configured": false, 00:11:20.072 "data_offset": 0, 00:11:20.072 "data_size": 63488 00:11:20.072 }, 00:11:20.072 { 00:11:20.072 "name": "BaseBdev2", 00:11:20.072 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:20.072 "is_configured": true, 00:11:20.072 "data_offset": 2048, 00:11:20.072 "data_size": 63488 00:11:20.072 }, 00:11:20.072 { 00:11:20.072 "name": "BaseBdev3", 00:11:20.072 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:20.072 "is_configured": true, 00:11:20.072 "data_offset": 2048, 00:11:20.072 "data_size": 63488 00:11:20.072 }, 00:11:20.072 { 00:11:20.072 "name": "BaseBdev4", 00:11:20.072 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:20.072 "is_configured": true, 00:11:20.072 "data_offset": 2048, 00:11:20.072 "data_size": 63488 00:11:20.072 } 00:11:20.072 ] 00:11:20.072 }' 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.072 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.640 11:43:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5678367f-b91b-4a90-b99d-827d45ad360d 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 [2024-11-04 11:43:45.982909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:20.640 [2024-11-04 11:43:45.983160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:20.640 [2024-11-04 11:43:45.983173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:20.640 [2024-11-04 11:43:45.983503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:20.640 [2024-11-04 11:43:45.983682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:20.640 [2024-11-04 11:43:45.983703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:20.640 [2024-11-04 11:43:45.983853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.640 NewBaseBdev 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 11:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 11:43:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 [ 00:11:20.640 { 00:11:20.640 "name": "NewBaseBdev", 00:11:20.640 "aliases": [ 00:11:20.640 "5678367f-b91b-4a90-b99d-827d45ad360d" 00:11:20.640 ], 00:11:20.640 "product_name": "Malloc disk", 00:11:20.640 "block_size": 512, 00:11:20.640 "num_blocks": 65536, 00:11:20.640 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:20.640 "assigned_rate_limits": { 00:11:20.640 "rw_ios_per_sec": 0, 00:11:20.640 "rw_mbytes_per_sec": 0, 00:11:20.640 "r_mbytes_per_sec": 0, 00:11:20.640 "w_mbytes_per_sec": 0 00:11:20.640 }, 00:11:20.640 "claimed": true, 00:11:20.640 "claim_type": "exclusive_write", 00:11:20.640 "zoned": false, 00:11:20.640 "supported_io_types": { 00:11:20.640 "read": true, 00:11:20.640 "write": true, 00:11:20.640 "unmap": true, 00:11:20.640 "flush": true, 00:11:20.640 "reset": true, 00:11:20.640 "nvme_admin": false, 00:11:20.640 "nvme_io": false, 00:11:20.640 "nvme_io_md": false, 00:11:20.640 "write_zeroes": true, 00:11:20.640 "zcopy": true, 00:11:20.640 "get_zone_info": false, 00:11:20.640 "zone_management": false, 00:11:20.640 "zone_append": false, 00:11:20.640 "compare": false, 00:11:20.640 "compare_and_write": false, 00:11:20.640 "abort": true, 00:11:20.640 "seek_hole": false, 00:11:20.640 "seek_data": false, 00:11:20.640 "copy": true, 00:11:20.640 "nvme_iov_md": false 00:11:20.640 }, 00:11:20.640 "memory_domains": [ 00:11:20.640 { 00:11:20.640 "dma_device_id": "system", 00:11:20.640 "dma_device_type": 1 00:11:20.640 }, 00:11:20.640 { 00:11:20.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.640 "dma_device_type": 2 00:11:20.640 } 00:11:20.640 ], 00:11:20.640 "driver_specific": {} 00:11:20.640 } 00:11:20.640 ] 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:20.640 11:43:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.640 "name": "Existed_Raid", 00:11:20.640 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:20.640 "strip_size_kb": 64, 00:11:20.640 
"state": "online", 00:11:20.640 "raid_level": "raid0", 00:11:20.640 "superblock": true, 00:11:20.640 "num_base_bdevs": 4, 00:11:20.640 "num_base_bdevs_discovered": 4, 00:11:20.640 "num_base_bdevs_operational": 4, 00:11:20.640 "base_bdevs_list": [ 00:11:20.640 { 00:11:20.640 "name": "NewBaseBdev", 00:11:20.640 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:20.640 "is_configured": true, 00:11:20.640 "data_offset": 2048, 00:11:20.640 "data_size": 63488 00:11:20.640 }, 00:11:20.640 { 00:11:20.640 "name": "BaseBdev2", 00:11:20.640 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:20.640 "is_configured": true, 00:11:20.640 "data_offset": 2048, 00:11:20.640 "data_size": 63488 00:11:20.640 }, 00:11:20.640 { 00:11:20.640 "name": "BaseBdev3", 00:11:20.640 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:20.640 "is_configured": true, 00:11:20.640 "data_offset": 2048, 00:11:20.640 "data_size": 63488 00:11:20.640 }, 00:11:20.640 { 00:11:20.640 "name": "BaseBdev4", 00:11:20.640 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:20.640 "is_configured": true, 00:11:20.640 "data_offset": 2048, 00:11:20.640 "data_size": 63488 00:11:20.640 } 00:11:20.640 ] 00:11:20.640 }' 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.640 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.208 
11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.208 [2024-11-04 11:43:46.518533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.208 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.209 "name": "Existed_Raid", 00:11:21.209 "aliases": [ 00:11:21.209 "32cd67cc-1769-49ff-bb28-d00fcfccd75b" 00:11:21.209 ], 00:11:21.209 "product_name": "Raid Volume", 00:11:21.209 "block_size": 512, 00:11:21.209 "num_blocks": 253952, 00:11:21.209 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:21.209 "assigned_rate_limits": { 00:11:21.209 "rw_ios_per_sec": 0, 00:11:21.209 "rw_mbytes_per_sec": 0, 00:11:21.209 "r_mbytes_per_sec": 0, 00:11:21.209 "w_mbytes_per_sec": 0 00:11:21.209 }, 00:11:21.209 "claimed": false, 00:11:21.209 "zoned": false, 00:11:21.209 "supported_io_types": { 00:11:21.209 "read": true, 00:11:21.209 "write": true, 00:11:21.209 "unmap": true, 00:11:21.209 "flush": true, 00:11:21.209 "reset": true, 00:11:21.209 "nvme_admin": false, 00:11:21.209 "nvme_io": false, 00:11:21.209 "nvme_io_md": false, 00:11:21.209 "write_zeroes": true, 00:11:21.209 "zcopy": false, 00:11:21.209 "get_zone_info": false, 00:11:21.209 "zone_management": false, 00:11:21.209 "zone_append": false, 00:11:21.209 "compare": false, 00:11:21.209 "compare_and_write": false, 00:11:21.209 "abort": 
false, 00:11:21.209 "seek_hole": false, 00:11:21.209 "seek_data": false, 00:11:21.209 "copy": false, 00:11:21.209 "nvme_iov_md": false 00:11:21.209 }, 00:11:21.209 "memory_domains": [ 00:11:21.209 { 00:11:21.209 "dma_device_id": "system", 00:11:21.209 "dma_device_type": 1 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.209 "dma_device_type": 2 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "dma_device_id": "system", 00:11:21.209 "dma_device_type": 1 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.209 "dma_device_type": 2 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "dma_device_id": "system", 00:11:21.209 "dma_device_type": 1 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.209 "dma_device_type": 2 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "dma_device_id": "system", 00:11:21.209 "dma_device_type": 1 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.209 "dma_device_type": 2 00:11:21.209 } 00:11:21.209 ], 00:11:21.209 "driver_specific": { 00:11:21.209 "raid": { 00:11:21.209 "uuid": "32cd67cc-1769-49ff-bb28-d00fcfccd75b", 00:11:21.209 "strip_size_kb": 64, 00:11:21.209 "state": "online", 00:11:21.209 "raid_level": "raid0", 00:11:21.209 "superblock": true, 00:11:21.209 "num_base_bdevs": 4, 00:11:21.209 "num_base_bdevs_discovered": 4, 00:11:21.209 "num_base_bdevs_operational": 4, 00:11:21.209 "base_bdevs_list": [ 00:11:21.209 { 00:11:21.209 "name": "NewBaseBdev", 00:11:21.209 "uuid": "5678367f-b91b-4a90-b99d-827d45ad360d", 00:11:21.209 "is_configured": true, 00:11:21.209 "data_offset": 2048, 00:11:21.209 "data_size": 63488 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "name": "BaseBdev2", 00:11:21.209 "uuid": "e8bc6d36-4257-4700-b860-6f97573eaa7d", 00:11:21.209 "is_configured": true, 00:11:21.209 "data_offset": 2048, 00:11:21.209 "data_size": 63488 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 
"name": "BaseBdev3", 00:11:21.209 "uuid": "745492b2-8941-4fd8-babf-fe6f8fa60a5a", 00:11:21.209 "is_configured": true, 00:11:21.209 "data_offset": 2048, 00:11:21.209 "data_size": 63488 00:11:21.209 }, 00:11:21.209 { 00:11:21.209 "name": "BaseBdev4", 00:11:21.209 "uuid": "e3459627-a3b9-4839-abf0-0275e08e069e", 00:11:21.209 "is_configured": true, 00:11:21.209 "data_offset": 2048, 00:11:21.209 "data_size": 63488 00:11:21.209 } 00:11:21.209 ] 00:11:21.209 } 00:11:21.209 } 00:11:21.209 }' 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:21.209 BaseBdev2 00:11:21.209 BaseBdev3 00:11:21.209 BaseBdev4' 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.209 11:43:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.209 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 [2024-11-04 11:43:46.865534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.468 [2024-11-04 11:43:46.865574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.468 [2024-11-04 11:43:46.865667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.468 [2024-11-04 11:43:46.865743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.468 [2024-11-04 11:43:46.865755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70279 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70279 ']' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70279 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70279 00:11:21.468 killing process with pid 70279 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70279' 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70279 00:11:21.468 [2024-11-04 11:43:46.911044] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.468 11:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70279 00:11:22.034 [2024-11-04 11:43:47.344850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.438 11:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:23.438 00:11:23.438 real 0m12.071s 00:11:23.438 user 0m19.123s 00:11:23.438 sys 0m2.139s 00:11:23.438 11:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.438 11:43:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.438 ************************************ 00:11:23.438 END TEST raid_state_function_test_sb 00:11:23.438 ************************************ 00:11:23.438 11:43:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:23.438 11:43:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:23.438 11:43:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.438 11:43:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.438 ************************************ 00:11:23.438 START TEST raid_superblock_test 00:11:23.438 ************************************ 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70958 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70958 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70958 ']' 00:11:23.438 11:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.439 11:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.439 11:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.439 11:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.439 11:43:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.439 [2024-11-04 11:43:48.716736] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:11:23.439 [2024-11-04 11:43:48.716912] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70958 ] 00:11:23.439 [2024-11-04 11:43:48.890064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.698 [2024-11-04 11:43:49.012531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.956 [2024-11-04 11:43:49.228944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.956 [2024-11-04 11:43:49.229016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:24.218 
11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.218 malloc1 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.218 [2024-11-04 11:43:49.649614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:24.218 [2024-11-04 11:43:49.649681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.218 [2024-11-04 11:43:49.649709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:24.218 [2024-11-04 11:43:49.649719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.218 [2024-11-04 11:43:49.652114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.218 [2024-11-04 11:43:49.652151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:24.218 pt1 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:24.218 11:43:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.219 malloc2 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.219 [2024-11-04 11:43:49.707360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.219 [2024-11-04 11:43:49.707441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.219 [2024-11-04 11:43:49.707466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:24.219 [2024-11-04 11:43:49.707476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.219 [2024-11-04 11:43:49.709829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.219 [2024-11-04 11:43:49.709863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.219 
pt2 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:24.219 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:24.220 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:24.220 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.220 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.220 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.220 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:24.220 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.220 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.481 malloc3 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.481 [2024-11-04 11:43:49.782502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.481 [2024-11-04 11:43:49.782554] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.481 [2024-11-04 11:43:49.782576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:24.481 [2024-11-04 11:43:49.782585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.481 [2024-11-04 11:43:49.784921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.481 [2024-11-04 11:43:49.784958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.481 pt3 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.481 malloc4 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.481 [2024-11-04 11:43:49.843768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:24.481 [2024-11-04 11:43:49.843825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.481 [2024-11-04 11:43:49.843845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:24.481 [2024-11-04 11:43:49.843855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.481 [2024-11-04 11:43:49.846160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.481 [2024-11-04 11:43:49.846212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:24.481 pt4 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.481 [2024-11-04 11:43:49.855785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:24.481 [2024-11-04 
11:43:49.857904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.481 [2024-11-04 11:43:49.857985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.481 [2024-11-04 11:43:49.858060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:24.481 [2024-11-04 11:43:49.858297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:24.481 [2024-11-04 11:43:49.858321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.481 [2024-11-04 11:43:49.858658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.481 [2024-11-04 11:43:49.858876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:24.481 [2024-11-04 11:43:49.858901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:24.481 [2024-11-04 11:43:49.859082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:24.481 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.482 "name": "raid_bdev1", 00:11:24.482 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:24.482 "strip_size_kb": 64, 00:11:24.482 "state": "online", 00:11:24.482 "raid_level": "raid0", 00:11:24.482 "superblock": true, 00:11:24.482 "num_base_bdevs": 4, 00:11:24.482 "num_base_bdevs_discovered": 4, 00:11:24.482 "num_base_bdevs_operational": 4, 00:11:24.482 "base_bdevs_list": [ 00:11:24.482 { 00:11:24.482 "name": "pt1", 00:11:24.482 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.482 "is_configured": true, 00:11:24.482 "data_offset": 2048, 00:11:24.482 "data_size": 63488 00:11:24.482 }, 00:11:24.482 { 00:11:24.482 "name": "pt2", 00:11:24.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.482 "is_configured": true, 00:11:24.482 "data_offset": 2048, 00:11:24.482 "data_size": 63488 00:11:24.482 }, 00:11:24.482 { 00:11:24.482 "name": "pt3", 00:11:24.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.482 "is_configured": true, 00:11:24.482 "data_offset": 2048, 00:11:24.482 
"data_size": 63488 00:11:24.482 }, 00:11:24.482 { 00:11:24.482 "name": "pt4", 00:11:24.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.482 "is_configured": true, 00:11:24.482 "data_offset": 2048, 00:11:24.482 "data_size": 63488 00:11:24.482 } 00:11:24.482 ] 00:11:24.482 }' 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.482 11:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.050 [2024-11-04 11:43:50.363327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.050 "name": "raid_bdev1", 00:11:25.050 "aliases": [ 00:11:25.050 "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4" 
00:11:25.050 ], 00:11:25.050 "product_name": "Raid Volume", 00:11:25.050 "block_size": 512, 00:11:25.050 "num_blocks": 253952, 00:11:25.050 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:25.050 "assigned_rate_limits": { 00:11:25.050 "rw_ios_per_sec": 0, 00:11:25.050 "rw_mbytes_per_sec": 0, 00:11:25.050 "r_mbytes_per_sec": 0, 00:11:25.050 "w_mbytes_per_sec": 0 00:11:25.050 }, 00:11:25.050 "claimed": false, 00:11:25.050 "zoned": false, 00:11:25.050 "supported_io_types": { 00:11:25.050 "read": true, 00:11:25.050 "write": true, 00:11:25.050 "unmap": true, 00:11:25.050 "flush": true, 00:11:25.050 "reset": true, 00:11:25.050 "nvme_admin": false, 00:11:25.050 "nvme_io": false, 00:11:25.050 "nvme_io_md": false, 00:11:25.050 "write_zeroes": true, 00:11:25.050 "zcopy": false, 00:11:25.050 "get_zone_info": false, 00:11:25.050 "zone_management": false, 00:11:25.050 "zone_append": false, 00:11:25.050 "compare": false, 00:11:25.050 "compare_and_write": false, 00:11:25.050 "abort": false, 00:11:25.050 "seek_hole": false, 00:11:25.050 "seek_data": false, 00:11:25.050 "copy": false, 00:11:25.050 "nvme_iov_md": false 00:11:25.050 }, 00:11:25.050 "memory_domains": [ 00:11:25.050 { 00:11:25.050 "dma_device_id": "system", 00:11:25.050 "dma_device_type": 1 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.050 "dma_device_type": 2 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "dma_device_id": "system", 00:11:25.050 "dma_device_type": 1 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.050 "dma_device_type": 2 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "dma_device_id": "system", 00:11:25.050 "dma_device_type": 1 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.050 "dma_device_type": 2 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "dma_device_id": "system", 00:11:25.050 "dma_device_type": 1 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:25.050 "dma_device_type": 2 00:11:25.050 } 00:11:25.050 ], 00:11:25.050 "driver_specific": { 00:11:25.050 "raid": { 00:11:25.050 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:25.050 "strip_size_kb": 64, 00:11:25.050 "state": "online", 00:11:25.050 "raid_level": "raid0", 00:11:25.050 "superblock": true, 00:11:25.050 "num_base_bdevs": 4, 00:11:25.050 "num_base_bdevs_discovered": 4, 00:11:25.050 "num_base_bdevs_operational": 4, 00:11:25.050 "base_bdevs_list": [ 00:11:25.050 { 00:11:25.050 "name": "pt1", 00:11:25.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.050 "is_configured": true, 00:11:25.050 "data_offset": 2048, 00:11:25.050 "data_size": 63488 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "name": "pt2", 00:11:25.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.050 "is_configured": true, 00:11:25.050 "data_offset": 2048, 00:11:25.050 "data_size": 63488 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "name": "pt3", 00:11:25.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.050 "is_configured": true, 00:11:25.050 "data_offset": 2048, 00:11:25.050 "data_size": 63488 00:11:25.050 }, 00:11:25.050 { 00:11:25.050 "name": "pt4", 00:11:25.050 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.050 "is_configured": true, 00:11:25.050 "data_offset": 2048, 00:11:25.050 "data_size": 63488 00:11:25.050 } 00:11:25.050 ] 00:11:25.050 } 00:11:25.050 } 00:11:25.050 }' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:25.050 pt2 00:11:25.050 pt3 00:11:25.050 pt4' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.050 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.309 11:43:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:25.309 [2024-11-04 11:43:50.694754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4 ']' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.309 [2024-11-04 11:43:50.742288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.309 [2024-11-04 11:43:50.742322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.309 [2024-11-04 11:43:50.742435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.309 [2024-11-04 11:43:50.742512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.309 [2024-11-04 11:43:50.742528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.309 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.610 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.611 [2024-11-04 11:43:50.910043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:25.611 [2024-11-04 11:43:50.912083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:25.611 [2024-11-04 11:43:50.912165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:25.611 [2024-11-04 11:43:50.912203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:25.611 [2024-11-04 11:43:50.912260] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:25.611 [2024-11-04 11:43:50.912309] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:25.611 [2024-11-04 11:43:50.912329] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:25.611 [2024-11-04 11:43:50.912350] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:25.611 [2024-11-04 11:43:50.912365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.611 [2024-11-04 11:43:50.912379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:25.611 request: 00:11:25.611 { 00:11:25.611 "name": "raid_bdev1", 00:11:25.611 "raid_level": "raid0", 00:11:25.611 "base_bdevs": [ 00:11:25.611 "malloc1", 00:11:25.611 "malloc2", 00:11:25.611 "malloc3", 00:11:25.611 "malloc4" 00:11:25.611 ], 00:11:25.611 "strip_size_kb": 64, 00:11:25.611 "superblock": false, 00:11:25.611 "method": "bdev_raid_create", 00:11:25.611 "req_id": 1 00:11:25.611 } 00:11:25.611 Got JSON-RPC error response 00:11:25.611 response: 00:11:25.611 { 00:11:25.611 "code": -17, 00:11:25.611 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:25.611 } 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.611 [2024-11-04 11:43:50.973895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.611 [2024-11-04 11:43:50.973962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.611 [2024-11-04 11:43:50.973982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:25.611 [2024-11-04 11:43:50.973994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.611 [2024-11-04 11:43:50.976469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.611 [2024-11-04 11:43:50.976508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.611 [2024-11-04 11:43:50.976601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:25.611 [2024-11-04 11:43:50.976680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.611 pt1 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.611 11:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.611 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.611 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.611 "name": "raid_bdev1", 00:11:25.611 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:25.611 "strip_size_kb": 64, 00:11:25.611 "state": "configuring", 00:11:25.611 "raid_level": "raid0", 00:11:25.611 "superblock": true, 00:11:25.611 "num_base_bdevs": 4, 00:11:25.611 "num_base_bdevs_discovered": 1, 00:11:25.611 "num_base_bdevs_operational": 4, 00:11:25.611 "base_bdevs_list": [ 00:11:25.611 { 00:11:25.611 "name": "pt1", 00:11:25.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.611 "is_configured": true, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 }, 00:11:25.611 { 00:11:25.611 "name": null, 00:11:25.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.611 "is_configured": false, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 }, 00:11:25.611 { 00:11:25.611 "name": null, 00:11:25.611 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:25.611 "is_configured": false, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 }, 00:11:25.611 { 00:11:25.611 "name": null, 00:11:25.611 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.611 "is_configured": false, 00:11:25.611 "data_offset": 2048, 00:11:25.611 "data_size": 63488 00:11:25.611 } 00:11:25.611 ] 00:11:25.611 }' 00:11:25.611 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.611 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.186 [2024-11-04 11:43:51.429135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.186 [2024-11-04 11:43:51.429213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.186 [2024-11-04 11:43:51.429236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:26.186 [2024-11-04 11:43:51.429250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.186 [2024-11-04 11:43:51.429764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.186 [2024-11-04 11:43:51.429800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.186 [2024-11-04 11:43:51.429900] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.186 [2024-11-04 11:43:51.429933] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.186 pt2 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.186 [2024-11-04 11:43:51.437115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.186 11:43:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.186 "name": "raid_bdev1", 00:11:26.186 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:26.186 "strip_size_kb": 64, 00:11:26.186 "state": "configuring", 00:11:26.186 "raid_level": "raid0", 00:11:26.186 "superblock": true, 00:11:26.186 "num_base_bdevs": 4, 00:11:26.186 "num_base_bdevs_discovered": 1, 00:11:26.186 "num_base_bdevs_operational": 4, 00:11:26.186 "base_bdevs_list": [ 00:11:26.186 { 00:11:26.186 "name": "pt1", 00:11:26.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.186 "is_configured": true, 00:11:26.186 "data_offset": 2048, 00:11:26.186 "data_size": 63488 00:11:26.186 }, 00:11:26.186 { 00:11:26.186 "name": null, 00:11:26.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.186 "is_configured": false, 00:11:26.186 "data_offset": 0, 00:11:26.186 "data_size": 63488 00:11:26.186 }, 00:11:26.186 { 00:11:26.186 "name": null, 00:11:26.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.186 "is_configured": false, 00:11:26.186 "data_offset": 2048, 00:11:26.186 "data_size": 63488 00:11:26.186 }, 00:11:26.186 { 00:11:26.186 "name": null, 00:11:26.186 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.186 "is_configured": false, 00:11:26.186 "data_offset": 2048, 00:11:26.186 "data_size": 63488 00:11:26.186 } 00:11:26.186 ] 00:11:26.186 }' 00:11:26.186 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.187 11:43:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.444 [2024-11-04 11:43:51.940285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.444 [2024-11-04 11:43:51.940356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.444 [2024-11-04 11:43:51.940378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:26.444 [2024-11-04 11:43:51.940388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.444 [2024-11-04 11:43:51.940889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.444 [2024-11-04 11:43:51.940909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.444 [2024-11-04 11:43:51.941005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.444 [2024-11-04 11:43:51.941028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.444 pt2 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.444 [2024-11-04 11:43:51.952267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:26.444 [2024-11-04 11:43:51.952328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.444 [2024-11-04 11:43:51.952357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:26.444 [2024-11-04 11:43:51.952368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.444 [2024-11-04 11:43:51.952911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.444 [2024-11-04 11:43:51.952937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:26.444 [2024-11-04 11:43:51.953033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:26.444 [2024-11-04 11:43:51.953075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:26.444 pt3 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.444 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.444 [2024-11-04 11:43:51.964204] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:26.444 [2024-11-04 11:43:51.964258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.444 [2024-11-04 11:43:51.964280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:26.444 [2024-11-04 11:43:51.964290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.444 [2024-11-04 11:43:51.964745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.444 [2024-11-04 11:43:51.964770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:26.444 [2024-11-04 11:43:51.964845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:26.444 [2024-11-04 11:43:51.964880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:26.444 [2024-11-04 11:43:51.965035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.444 [2024-11-04 11:43:51.965048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:26.702 [2024-11-04 11:43:51.965318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:26.702 [2024-11-04 11:43:51.965510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.702 [2024-11-04 11:43:51.965532] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:26.702 [2024-11-04 11:43:51.965686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.702 pt4 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.702 11:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.702 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.702 "name": "raid_bdev1", 00:11:26.702 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:26.702 "strip_size_kb": 64, 00:11:26.702 "state": "online", 00:11:26.702 "raid_level": "raid0", 00:11:26.702 
"superblock": true, 00:11:26.702 "num_base_bdevs": 4, 00:11:26.702 "num_base_bdevs_discovered": 4, 00:11:26.702 "num_base_bdevs_operational": 4, 00:11:26.702 "base_bdevs_list": [ 00:11:26.702 { 00:11:26.702 "name": "pt1", 00:11:26.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.702 "is_configured": true, 00:11:26.702 "data_offset": 2048, 00:11:26.702 "data_size": 63488 00:11:26.702 }, 00:11:26.702 { 00:11:26.702 "name": "pt2", 00:11:26.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.703 "is_configured": true, 00:11:26.703 "data_offset": 2048, 00:11:26.703 "data_size": 63488 00:11:26.703 }, 00:11:26.703 { 00:11:26.703 "name": "pt3", 00:11:26.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.703 "is_configured": true, 00:11:26.703 "data_offset": 2048, 00:11:26.703 "data_size": 63488 00:11:26.703 }, 00:11:26.703 { 00:11:26.703 "name": "pt4", 00:11:26.703 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.703 "is_configured": true, 00:11:26.703 "data_offset": 2048, 00:11:26.703 "data_size": 63488 00:11:26.703 } 00:11:26.703 ] 00:11:26.703 }' 00:11:26.703 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.703 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.960 11:43:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.960 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.961 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.961 [2024-11-04 11:43:52.459815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.219 "name": "raid_bdev1", 00:11:27.219 "aliases": [ 00:11:27.219 "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4" 00:11:27.219 ], 00:11:27.219 "product_name": "Raid Volume", 00:11:27.219 "block_size": 512, 00:11:27.219 "num_blocks": 253952, 00:11:27.219 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:27.219 "assigned_rate_limits": { 00:11:27.219 "rw_ios_per_sec": 0, 00:11:27.219 "rw_mbytes_per_sec": 0, 00:11:27.219 "r_mbytes_per_sec": 0, 00:11:27.219 "w_mbytes_per_sec": 0 00:11:27.219 }, 00:11:27.219 "claimed": false, 00:11:27.219 "zoned": false, 00:11:27.219 "supported_io_types": { 00:11:27.219 "read": true, 00:11:27.219 "write": true, 00:11:27.219 "unmap": true, 00:11:27.219 "flush": true, 00:11:27.219 "reset": true, 00:11:27.219 "nvme_admin": false, 00:11:27.219 "nvme_io": false, 00:11:27.219 "nvme_io_md": false, 00:11:27.219 "write_zeroes": true, 00:11:27.219 "zcopy": false, 00:11:27.219 "get_zone_info": false, 00:11:27.219 "zone_management": false, 00:11:27.219 "zone_append": false, 00:11:27.219 "compare": false, 00:11:27.219 "compare_and_write": false, 00:11:27.219 "abort": false, 00:11:27.219 "seek_hole": false, 00:11:27.219 "seek_data": false, 00:11:27.219 "copy": false, 00:11:27.219 "nvme_iov_md": false 00:11:27.219 }, 00:11:27.219 
"memory_domains": [ 00:11:27.219 { 00:11:27.219 "dma_device_id": "system", 00:11:27.219 "dma_device_type": 1 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.219 "dma_device_type": 2 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "dma_device_id": "system", 00:11:27.219 "dma_device_type": 1 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.219 "dma_device_type": 2 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "dma_device_id": "system", 00:11:27.219 "dma_device_type": 1 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.219 "dma_device_type": 2 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "dma_device_id": "system", 00:11:27.219 "dma_device_type": 1 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.219 "dma_device_type": 2 00:11:27.219 } 00:11:27.219 ], 00:11:27.219 "driver_specific": { 00:11:27.219 "raid": { 00:11:27.219 "uuid": "bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4", 00:11:27.219 "strip_size_kb": 64, 00:11:27.219 "state": "online", 00:11:27.219 "raid_level": "raid0", 00:11:27.219 "superblock": true, 00:11:27.219 "num_base_bdevs": 4, 00:11:27.219 "num_base_bdevs_discovered": 4, 00:11:27.219 "num_base_bdevs_operational": 4, 00:11:27.219 "base_bdevs_list": [ 00:11:27.219 { 00:11:27.219 "name": "pt1", 00:11:27.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.219 "is_configured": true, 00:11:27.219 "data_offset": 2048, 00:11:27.219 "data_size": 63488 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "name": "pt2", 00:11:27.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.219 "is_configured": true, 00:11:27.219 "data_offset": 2048, 00:11:27.219 "data_size": 63488 00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "name": "pt3", 00:11:27.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.219 "is_configured": true, 00:11:27.219 "data_offset": 2048, 00:11:27.219 "data_size": 63488 
00:11:27.219 }, 00:11:27.219 { 00:11:27.219 "name": "pt4", 00:11:27.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.219 "is_configured": true, 00:11:27.219 "data_offset": 2048, 00:11:27.219 "data_size": 63488 00:11:27.219 } 00:11:27.219 ] 00:11:27.219 } 00:11:27.219 } 00:11:27.219 }' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:27.219 pt2 00:11:27.219 pt3 00:11:27.219 pt4' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.219 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:27.477 [2024-11-04 11:43:52.791263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4 '!=' bc5cded1-18c8-43d2-8b3b-2cbd5e4c1fa4 ']' 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70958 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70958 ']' 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70958 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70958 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.477 killing process with pid 70958 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70958' 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70958 00:11:27.477 [2024-11-04 11:43:52.878085] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.477 [2024-11-04 11:43:52.878197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.477 11:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70958 00:11:27.477 [2024-11-04 11:43:52.878285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.477 [2024-11-04 11:43:52.878297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:28.043 [2024-11-04 11:43:53.301544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.976 11:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:28.976 00:11:28.976 real 0m5.842s 00:11:28.976 user 0m8.411s 00:11:28.976 sys 0m1.005s 00:11:28.976 11:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.976 11:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.976 ************************************ 00:11:28.976 END TEST raid_superblock_test 
00:11:28.976 ************************************ 00:11:29.234 11:43:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:29.234 11:43:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:29.234 11:43:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.234 11:43:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.234 ************************************ 00:11:29.234 START TEST raid_read_error_test 00:11:29.234 ************************************ 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3VPryr4iKC 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71226 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71226 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71226 ']' 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:29.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:29.234 11:43:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.234 [2024-11-04 11:43:54.642781] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:11:29.234 [2024-11-04 11:43:54.642897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71226 ] 00:11:29.491 [2024-11-04 11:43:54.821795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.491 [2024-11-04 11:43:54.939469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.750 [2024-11-04 11:43:55.154656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.750 [2024-11-04 11:43:55.154701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.008 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:30.008 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:30.008 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.008 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.008 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.008 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 BaseBdev1_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 true 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 [2024-11-04 11:43:55.569924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.267 [2024-11-04 11:43:55.570003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.267 [2024-11-04 11:43:55.570026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:30.267 [2024-11-04 11:43:55.570037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.267 [2024-11-04 11:43:55.572227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.267 [2024-11-04 11:43:55.572265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.267 BaseBdev1 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 BaseBdev2_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 true 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 [2024-11-04 11:43:55.637277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:30.267 [2024-11-04 11:43:55.637365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.267 [2024-11-04 11:43:55.637388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:30.267 [2024-11-04 11:43:55.637412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.267 [2024-11-04 11:43:55.639941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.267 [2024-11-04 11:43:55.639989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.267 BaseBdev2 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 BaseBdev3_malloc 00:11:30.267 11:43:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 true 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 [2024-11-04 11:43:55.713653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:30.267 [2024-11-04 11:43:55.713709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.267 [2024-11-04 11:43:55.713730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:30.267 [2024-11-04 11:43:55.713740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.267 [2024-11-04 11:43:55.716075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.267 [2024-11-04 11:43:55.716126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.267 BaseBdev3 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 BaseBdev4_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 true 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 [2024-11-04 11:43:55.779793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:30.267 [2024-11-04 11:43:55.779850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.267 [2024-11-04 11:43:55.779873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.267 [2024-11-04 11:43:55.779883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.267 [2024-11-04 11:43:55.782191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.267 [2024-11-04 11:43:55.782229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.267 BaseBdev4 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.267 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.525 [2024-11-04 11:43:55.791841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.525 [2024-11-04 11:43:55.793785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.525 [2024-11-04 11:43:55.793865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.525 [2024-11-04 11:43:55.793929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.525 [2024-11-04 11:43:55.794173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:30.525 [2024-11-04 11:43:55.794209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:30.525 [2024-11-04 11:43:55.794510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:30.525 [2024-11-04 11:43:55.794714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:30.525 [2024-11-04 11:43:55.794734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:30.525 [2024-11-04 11:43:55.794934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:30.525 11:43:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.525 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.525 "name": "raid_bdev1", 00:11:30.525 "uuid": "40f6c0e9-17b0-4b65-9b59-99fc933774e1", 00:11:30.525 "strip_size_kb": 64, 00:11:30.525 "state": "online", 00:11:30.525 "raid_level": "raid0", 00:11:30.525 "superblock": true, 00:11:30.525 "num_base_bdevs": 4, 00:11:30.525 "num_base_bdevs_discovered": 4, 00:11:30.525 "num_base_bdevs_operational": 4, 00:11:30.525 "base_bdevs_list": [ 00:11:30.525 
{ 00:11:30.525 "name": "BaseBdev1", 00:11:30.525 "uuid": "23d5eb09-c523-5f45-bae2-030c66c9d30e", 00:11:30.525 "is_configured": true, 00:11:30.525 "data_offset": 2048, 00:11:30.525 "data_size": 63488 00:11:30.525 }, 00:11:30.525 { 00:11:30.525 "name": "BaseBdev2", 00:11:30.525 "uuid": "2b038980-0308-5822-b871-365a9bc46f72", 00:11:30.525 "is_configured": true, 00:11:30.526 "data_offset": 2048, 00:11:30.526 "data_size": 63488 00:11:30.526 }, 00:11:30.526 { 00:11:30.526 "name": "BaseBdev3", 00:11:30.526 "uuid": "301fc782-7f2d-5d55-b502-f951f5c2550d", 00:11:30.526 "is_configured": true, 00:11:30.526 "data_offset": 2048, 00:11:30.526 "data_size": 63488 00:11:30.526 }, 00:11:30.526 { 00:11:30.526 "name": "BaseBdev4", 00:11:30.526 "uuid": "87cb0f3f-b465-59f9-a25c-452632abe47c", 00:11:30.526 "is_configured": true, 00:11:30.526 "data_offset": 2048, 00:11:30.526 "data_size": 63488 00:11:30.526 } 00:11:30.526 ] 00:11:30.526 }' 00:11:30.526 11:43:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.526 11:43:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.784 11:43:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:30.784 11:43:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:30.784 [2024-11-04 11:43:56.300577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.716 11:43:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.716 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.973 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.973 11:43:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.973 "name": "raid_bdev1", 00:11:31.973 "uuid": "40f6c0e9-17b0-4b65-9b59-99fc933774e1", 00:11:31.973 "strip_size_kb": 64, 00:11:31.973 "state": "online", 00:11:31.973 "raid_level": "raid0", 00:11:31.973 "superblock": true, 00:11:31.973 "num_base_bdevs": 4, 00:11:31.973 "num_base_bdevs_discovered": 4, 00:11:31.973 "num_base_bdevs_operational": 4, 00:11:31.973 "base_bdevs_list": [ 00:11:31.973 { 00:11:31.973 "name": "BaseBdev1", 00:11:31.973 "uuid": "23d5eb09-c523-5f45-bae2-030c66c9d30e", 00:11:31.973 "is_configured": true, 00:11:31.973 "data_offset": 2048, 00:11:31.973 "data_size": 63488 00:11:31.973 }, 00:11:31.973 { 00:11:31.973 "name": "BaseBdev2", 00:11:31.973 "uuid": "2b038980-0308-5822-b871-365a9bc46f72", 00:11:31.973 "is_configured": true, 00:11:31.973 "data_offset": 2048, 00:11:31.973 "data_size": 63488 00:11:31.973 }, 00:11:31.973 { 00:11:31.973 "name": "BaseBdev3", 00:11:31.973 "uuid": "301fc782-7f2d-5d55-b502-f951f5c2550d", 00:11:31.973 "is_configured": true, 00:11:31.973 "data_offset": 2048, 00:11:31.973 "data_size": 63488 00:11:31.973 }, 00:11:31.973 { 00:11:31.973 "name": "BaseBdev4", 00:11:31.973 "uuid": "87cb0f3f-b465-59f9-a25c-452632abe47c", 00:11:31.973 "is_configured": true, 00:11:31.973 "data_offset": 2048, 00:11:31.973 "data_size": 63488 00:11:31.973 } 00:11:31.973 ] 00:11:31.973 }' 00:11:31.973 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.973 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.231 [2024-11-04 11:43:57.692954] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.231 [2024-11-04 11:43:57.692997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.231 [2024-11-04 11:43:57.696062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.231 [2024-11-04 11:43:57.696142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.231 [2024-11-04 11:43:57.696194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.231 [2024-11-04 11:43:57.696207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:32.231 { 00:11:32.231 "results": [ 00:11:32.231 { 00:11:32.231 "job": "raid_bdev1", 00:11:32.231 "core_mask": "0x1", 00:11:32.231 "workload": "randrw", 00:11:32.231 "percentage": 50, 00:11:32.231 "status": "finished", 00:11:32.231 "queue_depth": 1, 00:11:32.231 "io_size": 131072, 00:11:32.231 "runtime": 1.393048, 00:11:32.231 "iops": 14461.813232566286, 00:11:32.231 "mibps": 1807.7266540707858, 00:11:32.231 "io_failed": 1, 00:11:32.231 "io_timeout": 0, 00:11:32.231 "avg_latency_us": 96.10110491381793, 00:11:32.231 "min_latency_us": 26.941484716157206, 00:11:32.231 "max_latency_us": 1502.46288209607 00:11:32.231 } 00:11:32.231 ], 00:11:32.231 "core_count": 1 00:11:32.231 } 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71226 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71226 ']' 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71226 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71226 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:32.231 killing process with pid 71226 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71226' 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71226 00:11:32.231 [2024-11-04 11:43:57.735306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.231 11:43:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71226 00:11:32.798 [2024-11-04 11:43:58.075146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.171 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3VPryr4iKC 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:34.172 00:11:34.172 real 0m4.758s 00:11:34.172 user 0m5.618s 00:11:34.172 sys 0m0.548s 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:34.172 11:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.172 ************************************ 00:11:34.172 END TEST raid_read_error_test 00:11:34.172 ************************************ 00:11:34.172 11:43:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:34.172 11:43:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:34.172 11:43:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.172 11:43:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.172 ************************************ 00:11:34.172 START TEST raid_write_error_test 00:11:34.172 ************************************ 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.clSUguSr91 00:11:34.172 11:43:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71368 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71368 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71368 ']' 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.172 11:43:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.172 [2024-11-04 11:43:59.497416] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:11:34.172 [2024-11-04 11:43:59.497614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71368 ] 00:11:34.172 [2024-11-04 11:43:59.682873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.429 [2024-11-04 11:43:59.813236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.687 [2024-11-04 11:44:00.042308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.687 [2024-11-04 11:44:00.042393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.944 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:34.945 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:34.945 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.945 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:34.945 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.945 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 BaseBdev1_malloc 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 true 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 [2024-11-04 11:44:00.485387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:35.203 [2024-11-04 11:44:00.485471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.203 [2024-11-04 11:44:00.485496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:35.203 [2024-11-04 11:44:00.485510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.203 [2024-11-04 11:44:00.487787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.203 [2024-11-04 11:44:00.487834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:35.203 BaseBdev1 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 BaseBdev2_malloc 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:35.203 11:44:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 true 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 [2024-11-04 11:44:00.553199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:35.203 [2024-11-04 11:44:00.553276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.203 [2024-11-04 11:44:00.553310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:35.203 [2024-11-04 11:44:00.553324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.203 [2024-11-04 11:44:00.555561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.203 [2024-11-04 11:44:00.555603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:35.203 BaseBdev2 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:35.203 BaseBdev3_malloc 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 true 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.203 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.203 [2024-11-04 11:44:00.628172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:35.203 [2024-11-04 11:44:00.628257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.203 [2024-11-04 11:44:00.628285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:35.203 [2024-11-04 11:44:00.628301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.203 [2024-11-04 11:44:00.630819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.204 [2024-11-04 11:44:00.630860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:35.204 BaseBdev3 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.204 BaseBdev4_malloc 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.204 true 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.204 [2024-11-04 11:44:00.700302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:35.204 [2024-11-04 11:44:00.700366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.204 [2024-11-04 11:44:00.700391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:35.204 [2024-11-04 11:44:00.700418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.204 [2024-11-04 11:44:00.702761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.204 [2024-11-04 11:44:00.702802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:35.204 BaseBdev4 
00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.204 [2024-11-04 11:44:00.712360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.204 [2024-11-04 11:44:00.714427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.204 [2024-11-04 11:44:00.714536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.204 [2024-11-04 11:44:00.714674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.204 [2024-11-04 11:44:00.715006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:35.204 [2024-11-04 11:44:00.715041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.204 [2024-11-04 11:44:00.715438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:35.204 [2024-11-04 11:44:00.715672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:35.204 [2024-11-04 11:44:00.715695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:35.204 [2024-11-04 11:44:00.715921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.204 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.462 "name": "raid_bdev1", 00:11:35.462 "uuid": "f3f37a03-c828-41f4-9cf6-56555f7c9944", 00:11:35.462 "strip_size_kb": 64, 00:11:35.462 "state": "online", 00:11:35.462 "raid_level": "raid0", 00:11:35.462 "superblock": true, 00:11:35.462 "num_base_bdevs": 4, 00:11:35.462 "num_base_bdevs_discovered": 4, 00:11:35.462 
"num_base_bdevs_operational": 4, 00:11:35.462 "base_bdevs_list": [ 00:11:35.462 { 00:11:35.462 "name": "BaseBdev1", 00:11:35.462 "uuid": "77657b20-ca23-54ee-95c7-2531399f71c4", 00:11:35.462 "is_configured": true, 00:11:35.462 "data_offset": 2048, 00:11:35.462 "data_size": 63488 00:11:35.462 }, 00:11:35.462 { 00:11:35.462 "name": "BaseBdev2", 00:11:35.462 "uuid": "ab909a45-a9df-571f-972b-1412369aeb73", 00:11:35.462 "is_configured": true, 00:11:35.462 "data_offset": 2048, 00:11:35.462 "data_size": 63488 00:11:35.462 }, 00:11:35.462 { 00:11:35.462 "name": "BaseBdev3", 00:11:35.462 "uuid": "e560da38-36e0-565b-897b-449ffd01642f", 00:11:35.462 "is_configured": true, 00:11:35.462 "data_offset": 2048, 00:11:35.462 "data_size": 63488 00:11:35.462 }, 00:11:35.462 { 00:11:35.462 "name": "BaseBdev4", 00:11:35.462 "uuid": "32d35d58-e913-5c99-8c33-e74827068174", 00:11:35.462 "is_configured": true, 00:11:35.462 "data_offset": 2048, 00:11:35.462 "data_size": 63488 00:11:35.462 } 00:11:35.462 ] 00:11:35.462 }' 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.462 11:44:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.719 11:44:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:35.719 11:44:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:35.977 [2024-11-04 11:44:01.285107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.910 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.910 "name": "raid_bdev1", 00:11:36.910 "uuid": "f3f37a03-c828-41f4-9cf6-56555f7c9944", 00:11:36.911 "strip_size_kb": 64, 00:11:36.911 "state": "online", 00:11:36.911 "raid_level": "raid0", 00:11:36.911 "superblock": true, 00:11:36.911 "num_base_bdevs": 4, 00:11:36.911 "num_base_bdevs_discovered": 4, 00:11:36.911 "num_base_bdevs_operational": 4, 00:11:36.911 "base_bdevs_list": [ 00:11:36.911 { 00:11:36.911 "name": "BaseBdev1", 00:11:36.911 "uuid": "77657b20-ca23-54ee-95c7-2531399f71c4", 00:11:36.911 "is_configured": true, 00:11:36.911 "data_offset": 2048, 00:11:36.911 "data_size": 63488 00:11:36.911 }, 00:11:36.911 { 00:11:36.911 "name": "BaseBdev2", 00:11:36.911 "uuid": "ab909a45-a9df-571f-972b-1412369aeb73", 00:11:36.911 "is_configured": true, 00:11:36.911 "data_offset": 2048, 00:11:36.911 "data_size": 63488 00:11:36.911 }, 00:11:36.911 { 00:11:36.911 "name": "BaseBdev3", 00:11:36.911 "uuid": "e560da38-36e0-565b-897b-449ffd01642f", 00:11:36.911 "is_configured": true, 00:11:36.911 "data_offset": 2048, 00:11:36.911 "data_size": 63488 00:11:36.911 }, 00:11:36.911 { 00:11:36.911 "name": "BaseBdev4", 00:11:36.911 "uuid": "32d35d58-e913-5c99-8c33-e74827068174", 00:11:36.911 "is_configured": true, 00:11:36.911 "data_offset": 2048, 00:11:36.911 "data_size": 63488 00:11:36.911 } 00:11:36.911 ] 00:11:36.911 }' 00:11:36.911 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.911 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:37.169 [2024-11-04 11:44:02.649933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.169 [2024-11-04 11:44:02.649980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.169 [2024-11-04 11:44:02.652973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.169 [2024-11-04 11:44:02.653053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.169 [2024-11-04 11:44:02.653105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.169 [2024-11-04 11:44:02.653120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:37.169 { 00:11:37.169 "results": [ 00:11:37.169 { 00:11:37.169 "job": "raid_bdev1", 00:11:37.169 "core_mask": "0x1", 00:11:37.169 "workload": "randrw", 00:11:37.169 "percentage": 50, 00:11:37.169 "status": "finished", 00:11:37.169 "queue_depth": 1, 00:11:37.169 "io_size": 131072, 00:11:37.169 "runtime": 1.365217, 00:11:37.169 "iops": 13758.252351091438, 00:11:37.169 "mibps": 1719.7815438864297, 00:11:37.169 "io_failed": 1, 00:11:37.169 "io_timeout": 0, 00:11:37.169 "avg_latency_us": 100.87304107184038, 00:11:37.169 "min_latency_us": 29.289082969432314, 00:11:37.169 "max_latency_us": 1581.1633187772925 00:11:37.169 } 00:11:37.169 ], 00:11:37.169 "core_count": 1 00:11:37.169 } 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71368 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71368 ']' 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71368 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:37.169 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71368 00:11:37.428 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:37.428 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:37.428 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71368' 00:11:37.428 killing process with pid 71368 00:11:37.428 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71368 00:11:37.428 [2024-11-04 11:44:02.692251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.428 11:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71368 00:11:37.686 [2024-11-04 11:44:03.030667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.clSUguSr91 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:39.064 00:11:39.064 real 0m4.922s 00:11:39.064 user 0m5.862s 00:11:39.064 sys 0m0.633s 00:11:39.064 
11:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.064 11:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.064 ************************************ 00:11:39.064 END TEST raid_write_error_test 00:11:39.064 ************************************ 00:11:39.064 11:44:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:39.064 11:44:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:39.064 11:44:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:39.064 11:44:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.064 11:44:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.064 ************************************ 00:11:39.064 START TEST raid_state_function_test 00:11:39.064 ************************************ 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.064 11:44:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:39.064 11:44:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71517 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71517' 00:11:39.064 Process raid pid: 71517 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71517 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71517 ']' 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:39.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:39.064 11:44:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.064 [2024-11-04 11:44:04.450479] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:11:39.064 [2024-11-04 11:44:04.450596] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.323 [2024-11-04 11:44:04.613432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.323 [2024-11-04 11:44:04.736577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.582 [2024-11-04 11:44:04.970087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.582 [2024-11-04 11:44:04.970146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.840 [2024-11-04 11:44:05.340429] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.840 [2024-11-04 11:44:05.340501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.840 [2024-11-04 11:44:05.340515] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.840 [2024-11-04 11:44:05.340529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.840 [2024-11-04 11:44:05.340538] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:39.840 [2024-11-04 11:44:05.340551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:39.840 [2024-11-04 11:44:05.340560] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:39.840 [2024-11-04 11:44:05.340573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:39.840 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.099 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.099 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.099 "name": "Existed_Raid", 00:11:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.099 "strip_size_kb": 64, 00:11:40.099 "state": "configuring", 00:11:40.099 "raid_level": "concat", 00:11:40.099 "superblock": false, 00:11:40.099 "num_base_bdevs": 4, 00:11:40.099 "num_base_bdevs_discovered": 0, 00:11:40.099 "num_base_bdevs_operational": 4, 00:11:40.099 "base_bdevs_list": [ 00:11:40.099 { 00:11:40.099 "name": "BaseBdev1", 00:11:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.099 "is_configured": false, 00:11:40.099 "data_offset": 0, 00:11:40.099 "data_size": 0 00:11:40.099 }, 00:11:40.099 { 00:11:40.099 "name": "BaseBdev2", 00:11:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.099 "is_configured": false, 00:11:40.099 "data_offset": 0, 00:11:40.099 "data_size": 0 00:11:40.099 }, 00:11:40.099 { 00:11:40.099 "name": "BaseBdev3", 00:11:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.099 "is_configured": false, 00:11:40.099 "data_offset": 0, 00:11:40.099 "data_size": 0 00:11:40.099 }, 00:11:40.099 { 00:11:40.099 "name": "BaseBdev4", 00:11:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.099 "is_configured": false, 00:11:40.099 "data_offset": 0, 00:11:40.099 "data_size": 0 00:11:40.099 } 00:11:40.099 ] 00:11:40.099 }' 00:11:40.099 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.099 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.357 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:40.357 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.357 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.357 [2024-11-04 11:44:05.795627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.357 [2024-11-04 11:44:05.795681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:40.357 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.357 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.358 [2024-11-04 11:44:05.803614] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.358 [2024-11-04 11:44:05.803671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.358 [2024-11-04 11:44:05.803683] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.358 [2024-11-04 11:44:05.803695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.358 [2024-11-04 11:44:05.803703] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.358 [2024-11-04 11:44:05.803714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.358 [2024-11-04 11:44:05.803725] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:40.358 [2024-11-04 11:44:05.803742] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.358 [2024-11-04 11:44:05.852336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.358 BaseBdev1 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.358 [ 00:11:40.358 { 00:11:40.358 "name": "BaseBdev1", 00:11:40.358 "aliases": [ 00:11:40.358 "3855ce3e-1d80-4d70-aea5-bc2e48702ba5" 00:11:40.358 ], 00:11:40.358 "product_name": "Malloc disk", 00:11:40.358 "block_size": 512, 00:11:40.358 "num_blocks": 65536, 00:11:40.358 "uuid": "3855ce3e-1d80-4d70-aea5-bc2e48702ba5", 00:11:40.358 "assigned_rate_limits": { 00:11:40.358 "rw_ios_per_sec": 0, 00:11:40.358 "rw_mbytes_per_sec": 0, 00:11:40.358 "r_mbytes_per_sec": 0, 00:11:40.358 "w_mbytes_per_sec": 0 00:11:40.358 }, 00:11:40.358 "claimed": true, 00:11:40.358 "claim_type": "exclusive_write", 00:11:40.358 "zoned": false, 00:11:40.358 "supported_io_types": { 00:11:40.358 "read": true, 00:11:40.358 "write": true, 00:11:40.358 "unmap": true, 00:11:40.358 "flush": true, 00:11:40.358 "reset": true, 00:11:40.358 "nvme_admin": false, 00:11:40.358 "nvme_io": false, 00:11:40.358 "nvme_io_md": false, 00:11:40.358 "write_zeroes": true, 00:11:40.358 "zcopy": true, 00:11:40.358 "get_zone_info": false, 00:11:40.358 "zone_management": false, 00:11:40.358 "zone_append": false, 00:11:40.358 "compare": false, 00:11:40.358 "compare_and_write": false, 00:11:40.358 "abort": true, 00:11:40.358 "seek_hole": false, 00:11:40.358 "seek_data": false, 00:11:40.358 "copy": true, 00:11:40.358 "nvme_iov_md": false 00:11:40.358 }, 00:11:40.358 "memory_domains": [ 00:11:40.358 { 00:11:40.358 "dma_device_id": "system", 00:11:40.358 "dma_device_type": 1 00:11:40.358 }, 00:11:40.358 { 00:11:40.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.358 "dma_device_type": 2 00:11:40.358 } 00:11:40.358 ], 00:11:40.358 "driver_specific": {} 00:11:40.358 } 00:11:40.358 ] 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.358 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.616 "name": "Existed_Raid", 
00:11:40.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.616 "strip_size_kb": 64, 00:11:40.616 "state": "configuring", 00:11:40.616 "raid_level": "concat", 00:11:40.616 "superblock": false, 00:11:40.616 "num_base_bdevs": 4, 00:11:40.616 "num_base_bdevs_discovered": 1, 00:11:40.616 "num_base_bdevs_operational": 4, 00:11:40.616 "base_bdevs_list": [ 00:11:40.616 { 00:11:40.616 "name": "BaseBdev1", 00:11:40.616 "uuid": "3855ce3e-1d80-4d70-aea5-bc2e48702ba5", 00:11:40.616 "is_configured": true, 00:11:40.616 "data_offset": 0, 00:11:40.616 "data_size": 65536 00:11:40.616 }, 00:11:40.616 { 00:11:40.616 "name": "BaseBdev2", 00:11:40.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.616 "is_configured": false, 00:11:40.616 "data_offset": 0, 00:11:40.616 "data_size": 0 00:11:40.616 }, 00:11:40.616 { 00:11:40.616 "name": "BaseBdev3", 00:11:40.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.616 "is_configured": false, 00:11:40.616 "data_offset": 0, 00:11:40.616 "data_size": 0 00:11:40.616 }, 00:11:40.616 { 00:11:40.616 "name": "BaseBdev4", 00:11:40.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.616 "is_configured": false, 00:11:40.616 "data_offset": 0, 00:11:40.616 "data_size": 0 00:11:40.616 } 00:11:40.616 ] 00:11:40.616 }' 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.616 11:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.874 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.874 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.874 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.874 [2024-11-04 11:44:06.331722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.874 [2024-11-04 11:44:06.331796] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:40.874 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.874 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.875 [2024-11-04 11:44:06.343764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.875 [2024-11-04 11:44:06.345891] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.875 [2024-11-04 11:44:06.345951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.875 [2024-11-04 11:44:06.345965] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.875 [2024-11-04 11:44:06.345980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.875 [2024-11-04 11:44:06.345989] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:40.875 [2024-11-04 11:44:06.346002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.875 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.133 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.133 "name": "Existed_Raid", 00:11:41.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.133 "strip_size_kb": 64, 00:11:41.133 "state": "configuring", 00:11:41.133 "raid_level": "concat", 00:11:41.133 "superblock": false, 00:11:41.133 "num_base_bdevs": 4, 00:11:41.133 
"num_base_bdevs_discovered": 1, 00:11:41.133 "num_base_bdevs_operational": 4, 00:11:41.133 "base_bdevs_list": [ 00:11:41.133 { 00:11:41.133 "name": "BaseBdev1", 00:11:41.133 "uuid": "3855ce3e-1d80-4d70-aea5-bc2e48702ba5", 00:11:41.133 "is_configured": true, 00:11:41.133 "data_offset": 0, 00:11:41.133 "data_size": 65536 00:11:41.133 }, 00:11:41.133 { 00:11:41.133 "name": "BaseBdev2", 00:11:41.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.133 "is_configured": false, 00:11:41.133 "data_offset": 0, 00:11:41.133 "data_size": 0 00:11:41.133 }, 00:11:41.133 { 00:11:41.133 "name": "BaseBdev3", 00:11:41.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.133 "is_configured": false, 00:11:41.133 "data_offset": 0, 00:11:41.133 "data_size": 0 00:11:41.133 }, 00:11:41.133 { 00:11:41.133 "name": "BaseBdev4", 00:11:41.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.133 "is_configured": false, 00:11:41.133 "data_offset": 0, 00:11:41.133 "data_size": 0 00:11:41.133 } 00:11:41.133 ] 00:11:41.133 }' 00:11:41.133 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.133 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 [2024-11-04 11:44:06.872951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.391 BaseBdev2 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:41.391 11:44:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 [ 00:11:41.391 { 00:11:41.391 "name": "BaseBdev2", 00:11:41.391 "aliases": [ 00:11:41.391 "19d36849-67ca-4812-8262-cd54e077c38a" 00:11:41.391 ], 00:11:41.391 "product_name": "Malloc disk", 00:11:41.391 "block_size": 512, 00:11:41.391 "num_blocks": 65536, 00:11:41.391 "uuid": "19d36849-67ca-4812-8262-cd54e077c38a", 00:11:41.391 "assigned_rate_limits": { 00:11:41.391 "rw_ios_per_sec": 0, 00:11:41.391 "rw_mbytes_per_sec": 0, 00:11:41.391 "r_mbytes_per_sec": 0, 00:11:41.391 "w_mbytes_per_sec": 0 00:11:41.391 }, 00:11:41.391 "claimed": true, 00:11:41.391 "claim_type": "exclusive_write", 00:11:41.391 "zoned": false, 00:11:41.391 "supported_io_types": { 
00:11:41.391 "read": true, 00:11:41.391 "write": true, 00:11:41.391 "unmap": true, 00:11:41.391 "flush": true, 00:11:41.391 "reset": true, 00:11:41.391 "nvme_admin": false, 00:11:41.391 "nvme_io": false, 00:11:41.391 "nvme_io_md": false, 00:11:41.391 "write_zeroes": true, 00:11:41.391 "zcopy": true, 00:11:41.391 "get_zone_info": false, 00:11:41.391 "zone_management": false, 00:11:41.391 "zone_append": false, 00:11:41.391 "compare": false, 00:11:41.391 "compare_and_write": false, 00:11:41.391 "abort": true, 00:11:41.391 "seek_hole": false, 00:11:41.391 "seek_data": false, 00:11:41.391 "copy": true, 00:11:41.391 "nvme_iov_md": false 00:11:41.391 }, 00:11:41.391 "memory_domains": [ 00:11:41.391 { 00:11:41.391 "dma_device_id": "system", 00:11:41.391 "dma_device_type": 1 00:11:41.391 }, 00:11:41.391 { 00:11:41.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.391 "dma_device_type": 2 00:11:41.391 } 00:11:41.391 ], 00:11:41.391 "driver_specific": {} 00:11:41.391 } 00:11:41.391 ] 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.650 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.650 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.650 "name": "Existed_Raid", 00:11:41.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.650 "strip_size_kb": 64, 00:11:41.650 "state": "configuring", 00:11:41.650 "raid_level": "concat", 00:11:41.650 "superblock": false, 00:11:41.650 "num_base_bdevs": 4, 00:11:41.650 "num_base_bdevs_discovered": 2, 00:11:41.650 "num_base_bdevs_operational": 4, 00:11:41.650 "base_bdevs_list": [ 00:11:41.650 { 00:11:41.650 "name": "BaseBdev1", 00:11:41.650 "uuid": "3855ce3e-1d80-4d70-aea5-bc2e48702ba5", 00:11:41.650 "is_configured": true, 00:11:41.650 "data_offset": 0, 00:11:41.650 "data_size": 65536 00:11:41.650 }, 00:11:41.650 { 00:11:41.650 "name": "BaseBdev2", 00:11:41.650 "uuid": "19d36849-67ca-4812-8262-cd54e077c38a", 00:11:41.650 
"is_configured": true, 00:11:41.650 "data_offset": 0, 00:11:41.650 "data_size": 65536 00:11:41.650 }, 00:11:41.650 { 00:11:41.650 "name": "BaseBdev3", 00:11:41.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.650 "is_configured": false, 00:11:41.650 "data_offset": 0, 00:11:41.650 "data_size": 0 00:11:41.650 }, 00:11:41.650 { 00:11:41.650 "name": "BaseBdev4", 00:11:41.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.650 "is_configured": false, 00:11:41.650 "data_offset": 0, 00:11:41.650 "data_size": 0 00:11:41.650 } 00:11:41.650 ] 00:11:41.650 }' 00:11:41.650 11:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.650 11:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.909 [2024-11-04 11:44:07.389200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.909 BaseBdev3 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.909 [ 00:11:41.909 { 00:11:41.909 "name": "BaseBdev3", 00:11:41.909 "aliases": [ 00:11:41.909 "441d3ea0-6361-47f1-8286-9836c23c04a6" 00:11:41.909 ], 00:11:41.909 "product_name": "Malloc disk", 00:11:41.909 "block_size": 512, 00:11:41.909 "num_blocks": 65536, 00:11:41.909 "uuid": "441d3ea0-6361-47f1-8286-9836c23c04a6", 00:11:41.909 "assigned_rate_limits": { 00:11:41.909 "rw_ios_per_sec": 0, 00:11:41.909 "rw_mbytes_per_sec": 0, 00:11:41.909 "r_mbytes_per_sec": 0, 00:11:41.909 "w_mbytes_per_sec": 0 00:11:41.909 }, 00:11:41.909 "claimed": true, 00:11:41.909 "claim_type": "exclusive_write", 00:11:41.909 "zoned": false, 00:11:41.909 "supported_io_types": { 00:11:41.909 "read": true, 00:11:41.909 "write": true, 00:11:41.909 "unmap": true, 00:11:41.909 "flush": true, 00:11:41.909 "reset": true, 00:11:41.909 "nvme_admin": false, 00:11:41.909 "nvme_io": false, 00:11:41.909 "nvme_io_md": false, 00:11:41.909 "write_zeroes": true, 00:11:41.909 "zcopy": true, 00:11:41.909 "get_zone_info": false, 00:11:41.909 "zone_management": false, 00:11:41.909 "zone_append": false, 00:11:41.909 "compare": false, 00:11:41.909 "compare_and_write": false, 
00:11:41.909 "abort": true, 00:11:41.909 "seek_hole": false, 00:11:41.909 "seek_data": false, 00:11:41.909 "copy": true, 00:11:41.909 "nvme_iov_md": false 00:11:41.909 }, 00:11:41.909 "memory_domains": [ 00:11:41.909 { 00:11:41.909 "dma_device_id": "system", 00:11:41.909 "dma_device_type": 1 00:11:41.909 }, 00:11:41.909 { 00:11:41.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.909 "dma_device_type": 2 00:11:41.909 } 00:11:41.909 ], 00:11:41.909 "driver_specific": {} 00:11:41.909 } 00:11:41.909 ] 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.909 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.168 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.168 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.168 "name": "Existed_Raid", 00:11:42.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.168 "strip_size_kb": 64, 00:11:42.168 "state": "configuring", 00:11:42.168 "raid_level": "concat", 00:11:42.168 "superblock": false, 00:11:42.168 "num_base_bdevs": 4, 00:11:42.168 "num_base_bdevs_discovered": 3, 00:11:42.168 "num_base_bdevs_operational": 4, 00:11:42.168 "base_bdevs_list": [ 00:11:42.168 { 00:11:42.168 "name": "BaseBdev1", 00:11:42.168 "uuid": "3855ce3e-1d80-4d70-aea5-bc2e48702ba5", 00:11:42.168 "is_configured": true, 00:11:42.168 "data_offset": 0, 00:11:42.168 "data_size": 65536 00:11:42.168 }, 00:11:42.168 { 00:11:42.168 "name": "BaseBdev2", 00:11:42.168 "uuid": "19d36849-67ca-4812-8262-cd54e077c38a", 00:11:42.168 "is_configured": true, 00:11:42.168 "data_offset": 0, 00:11:42.168 "data_size": 65536 00:11:42.168 }, 00:11:42.168 { 00:11:42.168 "name": "BaseBdev3", 00:11:42.168 "uuid": "441d3ea0-6361-47f1-8286-9836c23c04a6", 00:11:42.168 "is_configured": true, 00:11:42.168 "data_offset": 0, 00:11:42.168 "data_size": 65536 00:11:42.168 }, 00:11:42.168 { 00:11:42.168 "name": "BaseBdev4", 00:11:42.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.168 "is_configured": false, 
00:11:42.168 "data_offset": 0, 00:11:42.168 "data_size": 0 00:11:42.168 } 00:11:42.168 ] 00:11:42.168 }' 00:11:42.168 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.168 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.427 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:42.427 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.427 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.427 [2024-11-04 11:44:07.886872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:42.427 [2024-11-04 11:44:07.886947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:42.427 [2024-11-04 11:44:07.886958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:42.427 [2024-11-04 11:44:07.887332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:42.427 [2024-11-04 11:44:07.887641] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:42.428 [2024-11-04 11:44:07.887673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:42.428 [2024-11-04 11:44:07.888046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.428 BaseBdev4 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.428 [ 00:11:42.428 { 00:11:42.428 "name": "BaseBdev4", 00:11:42.428 "aliases": [ 00:11:42.428 "7fff82c4-ce4a-4b7e-afb9-d42d876e5a6a" 00:11:42.428 ], 00:11:42.428 "product_name": "Malloc disk", 00:11:42.428 "block_size": 512, 00:11:42.428 "num_blocks": 65536, 00:11:42.428 "uuid": "7fff82c4-ce4a-4b7e-afb9-d42d876e5a6a", 00:11:42.428 "assigned_rate_limits": { 00:11:42.428 "rw_ios_per_sec": 0, 00:11:42.428 "rw_mbytes_per_sec": 0, 00:11:42.428 "r_mbytes_per_sec": 0, 00:11:42.428 "w_mbytes_per_sec": 0 00:11:42.428 }, 00:11:42.428 "claimed": true, 00:11:42.428 "claim_type": "exclusive_write", 00:11:42.428 "zoned": false, 00:11:42.428 "supported_io_types": { 00:11:42.428 "read": true, 00:11:42.428 "write": true, 00:11:42.428 "unmap": true, 00:11:42.428 "flush": true, 00:11:42.428 "reset": true, 00:11:42.428 
"nvme_admin": false, 00:11:42.428 "nvme_io": false, 00:11:42.428 "nvme_io_md": false, 00:11:42.428 "write_zeroes": true, 00:11:42.428 "zcopy": true, 00:11:42.428 "get_zone_info": false, 00:11:42.428 "zone_management": false, 00:11:42.428 "zone_append": false, 00:11:42.428 "compare": false, 00:11:42.428 "compare_and_write": false, 00:11:42.428 "abort": true, 00:11:42.428 "seek_hole": false, 00:11:42.428 "seek_data": false, 00:11:42.428 "copy": true, 00:11:42.428 "nvme_iov_md": false 00:11:42.428 }, 00:11:42.428 "memory_domains": [ 00:11:42.428 { 00:11:42.428 "dma_device_id": "system", 00:11:42.428 "dma_device_type": 1 00:11:42.428 }, 00:11:42.428 { 00:11:42.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.428 "dma_device_type": 2 00:11:42.428 } 00:11:42.428 ], 00:11:42.428 "driver_specific": {} 00:11:42.428 } 00:11:42.428 ] 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.428 
11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.428 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.687 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.687 "name": "Existed_Raid", 00:11:42.687 "uuid": "1e920364-7d7f-48bb-99da-909be4572e39", 00:11:42.687 "strip_size_kb": 64, 00:11:42.687 "state": "online", 00:11:42.687 "raid_level": "concat", 00:11:42.687 "superblock": false, 00:11:42.687 "num_base_bdevs": 4, 00:11:42.687 "num_base_bdevs_discovered": 4, 00:11:42.687 "num_base_bdevs_operational": 4, 00:11:42.687 "base_bdevs_list": [ 00:11:42.687 { 00:11:42.687 "name": "BaseBdev1", 00:11:42.687 "uuid": "3855ce3e-1d80-4d70-aea5-bc2e48702ba5", 00:11:42.687 "is_configured": true, 00:11:42.687 "data_offset": 0, 00:11:42.687 "data_size": 65536 00:11:42.687 }, 00:11:42.687 { 00:11:42.687 "name": "BaseBdev2", 00:11:42.687 "uuid": "19d36849-67ca-4812-8262-cd54e077c38a", 00:11:42.687 "is_configured": true, 00:11:42.687 "data_offset": 0, 00:11:42.687 "data_size": 65536 00:11:42.687 }, 00:11:42.687 { 00:11:42.687 "name": "BaseBdev3", 
00:11:42.687 "uuid": "441d3ea0-6361-47f1-8286-9836c23c04a6", 00:11:42.687 "is_configured": true, 00:11:42.687 "data_offset": 0, 00:11:42.687 "data_size": 65536 00:11:42.687 }, 00:11:42.687 { 00:11:42.687 "name": "BaseBdev4", 00:11:42.687 "uuid": "7fff82c4-ce4a-4b7e-afb9-d42d876e5a6a", 00:11:42.687 "is_configured": true, 00:11:42.687 "data_offset": 0, 00:11:42.687 "data_size": 65536 00:11:42.687 } 00:11:42.687 ] 00:11:42.687 }' 00:11:42.687 11:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.687 11:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.946 [2024-11-04 11:44:08.342619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.946 
11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.946 "name": "Existed_Raid", 00:11:42.946 "aliases": [ 00:11:42.946 "1e920364-7d7f-48bb-99da-909be4572e39" 00:11:42.946 ], 00:11:42.946 "product_name": "Raid Volume", 00:11:42.946 "block_size": 512, 00:11:42.946 "num_blocks": 262144, 00:11:42.946 "uuid": "1e920364-7d7f-48bb-99da-909be4572e39", 00:11:42.946 "assigned_rate_limits": { 00:11:42.946 "rw_ios_per_sec": 0, 00:11:42.946 "rw_mbytes_per_sec": 0, 00:11:42.946 "r_mbytes_per_sec": 0, 00:11:42.946 "w_mbytes_per_sec": 0 00:11:42.946 }, 00:11:42.946 "claimed": false, 00:11:42.946 "zoned": false, 00:11:42.946 "supported_io_types": { 00:11:42.946 "read": true, 00:11:42.946 "write": true, 00:11:42.946 "unmap": true, 00:11:42.946 "flush": true, 00:11:42.946 "reset": true, 00:11:42.946 "nvme_admin": false, 00:11:42.946 "nvme_io": false, 00:11:42.946 "nvme_io_md": false, 00:11:42.946 "write_zeroes": true, 00:11:42.946 "zcopy": false, 00:11:42.946 "get_zone_info": false, 00:11:42.946 "zone_management": false, 00:11:42.946 "zone_append": false, 00:11:42.946 "compare": false, 00:11:42.946 "compare_and_write": false, 00:11:42.946 "abort": false, 00:11:42.946 "seek_hole": false, 00:11:42.946 "seek_data": false, 00:11:42.946 "copy": false, 00:11:42.946 "nvme_iov_md": false 00:11:42.946 }, 00:11:42.946 "memory_domains": [ 00:11:42.946 { 00:11:42.946 "dma_device_id": "system", 00:11:42.946 "dma_device_type": 1 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.946 "dma_device_type": 2 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "dma_device_id": "system", 00:11:42.946 "dma_device_type": 1 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.946 "dma_device_type": 2 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "dma_device_id": "system", 00:11:42.946 "dma_device_type": 1 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:42.946 "dma_device_type": 2 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "dma_device_id": "system", 00:11:42.946 "dma_device_type": 1 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.946 "dma_device_type": 2 00:11:42.946 } 00:11:42.946 ], 00:11:42.946 "driver_specific": { 00:11:42.946 "raid": { 00:11:42.946 "uuid": "1e920364-7d7f-48bb-99da-909be4572e39", 00:11:42.946 "strip_size_kb": 64, 00:11:42.946 "state": "online", 00:11:42.946 "raid_level": "concat", 00:11:42.946 "superblock": false, 00:11:42.946 "num_base_bdevs": 4, 00:11:42.946 "num_base_bdevs_discovered": 4, 00:11:42.946 "num_base_bdevs_operational": 4, 00:11:42.946 "base_bdevs_list": [ 00:11:42.946 { 00:11:42.946 "name": "BaseBdev1", 00:11:42.946 "uuid": "3855ce3e-1d80-4d70-aea5-bc2e48702ba5", 00:11:42.946 "is_configured": true, 00:11:42.946 "data_offset": 0, 00:11:42.946 "data_size": 65536 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "name": "BaseBdev2", 00:11:42.946 "uuid": "19d36849-67ca-4812-8262-cd54e077c38a", 00:11:42.946 "is_configured": true, 00:11:42.946 "data_offset": 0, 00:11:42.946 "data_size": 65536 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "name": "BaseBdev3", 00:11:42.946 "uuid": "441d3ea0-6361-47f1-8286-9836c23c04a6", 00:11:42.946 "is_configured": true, 00:11:42.946 "data_offset": 0, 00:11:42.946 "data_size": 65536 00:11:42.946 }, 00:11:42.946 { 00:11:42.946 "name": "BaseBdev4", 00:11:42.946 "uuid": "7fff82c4-ce4a-4b7e-afb9-d42d876e5a6a", 00:11:42.946 "is_configured": true, 00:11:42.946 "data_offset": 0, 00:11:42.946 "data_size": 65536 00:11:42.946 } 00:11:42.946 ] 00:11:42.946 } 00:11:42.946 } 00:11:42.946 }' 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:42.946 BaseBdev2 
00:11:42.946 BaseBdev3 00:11:42.946 BaseBdev4' 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.946 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.947 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:42.947 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.947 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.947 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.205 11:44:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.205 11:44:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.205 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.205 [2024-11-04 11:44:08.637780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.205 [2024-11-04 11:44:08.637821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.205 [2024-11-04 11:44:08.637876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.464 "name": "Existed_Raid", 00:11:43.464 "uuid": "1e920364-7d7f-48bb-99da-909be4572e39", 00:11:43.464 "strip_size_kb": 64, 00:11:43.464 "state": "offline", 00:11:43.464 "raid_level": "concat", 00:11:43.464 "superblock": false, 00:11:43.464 "num_base_bdevs": 4, 00:11:43.464 "num_base_bdevs_discovered": 3, 00:11:43.464 "num_base_bdevs_operational": 3, 00:11:43.464 "base_bdevs_list": [ 00:11:43.464 { 00:11:43.464 "name": null, 00:11:43.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.464 "is_configured": false, 00:11:43.464 "data_offset": 0, 00:11:43.464 "data_size": 65536 00:11:43.464 }, 00:11:43.464 { 00:11:43.464 "name": "BaseBdev2", 00:11:43.464 "uuid": "19d36849-67ca-4812-8262-cd54e077c38a", 00:11:43.464 "is_configured": 
true, 00:11:43.464 "data_offset": 0, 00:11:43.464 "data_size": 65536 00:11:43.464 }, 00:11:43.464 { 00:11:43.464 "name": "BaseBdev3", 00:11:43.464 "uuid": "441d3ea0-6361-47f1-8286-9836c23c04a6", 00:11:43.464 "is_configured": true, 00:11:43.464 "data_offset": 0, 00:11:43.464 "data_size": 65536 00:11:43.464 }, 00:11:43.464 { 00:11:43.464 "name": "BaseBdev4", 00:11:43.464 "uuid": "7fff82c4-ce4a-4b7e-afb9-d42d876e5a6a", 00:11:43.464 "is_configured": true, 00:11:43.464 "data_offset": 0, 00:11:43.464 "data_size": 65536 00:11:43.464 } 00:11:43.464 ] 00:11:43.464 }' 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.464 11:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.722 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:43.722 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.722 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:43.722 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.722 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.722 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.722 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.981 [2024-11-04 11:44:09.261261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.981 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.981 [2024-11-04 11:44:09.419007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.239 11:44:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.239 [2024-11-04 11:44:09.581132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:44.239 [2024-11-04 11:44:09.581194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:44.239 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:44.240 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:44.240 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.240 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.240 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.240 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.498 BaseBdev2 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.498 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.498 [ 00:11:44.498 { 00:11:44.498 "name": "BaseBdev2", 00:11:44.498 "aliases": [ 00:11:44.498 "066b625c-8a27-49f9-b357-8659a6c602a5" 00:11:44.498 ], 00:11:44.498 "product_name": "Malloc disk", 00:11:44.498 "block_size": 512, 00:11:44.498 "num_blocks": 65536, 00:11:44.498 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:44.498 "assigned_rate_limits": { 00:11:44.498 "rw_ios_per_sec": 0, 00:11:44.498 "rw_mbytes_per_sec": 0, 00:11:44.498 "r_mbytes_per_sec": 0, 00:11:44.498 "w_mbytes_per_sec": 0 00:11:44.498 }, 00:11:44.498 "claimed": false, 00:11:44.498 "zoned": false, 00:11:44.498 "supported_io_types": { 00:11:44.498 "read": true, 00:11:44.498 "write": true, 00:11:44.498 "unmap": true, 00:11:44.498 "flush": true, 00:11:44.498 "reset": true, 00:11:44.498 "nvme_admin": false, 00:11:44.498 "nvme_io": false, 00:11:44.498 "nvme_io_md": false, 00:11:44.498 "write_zeroes": true, 00:11:44.498 "zcopy": true, 00:11:44.498 "get_zone_info": false, 00:11:44.498 "zone_management": false, 00:11:44.498 "zone_append": false, 00:11:44.498 "compare": false, 00:11:44.498 "compare_and_write": false, 00:11:44.498 "abort": true, 00:11:44.498 "seek_hole": false, 00:11:44.498 "seek_data": false, 
00:11:44.499 "copy": true, 00:11:44.499 "nvme_iov_md": false 00:11:44.499 }, 00:11:44.499 "memory_domains": [ 00:11:44.499 { 00:11:44.499 "dma_device_id": "system", 00:11:44.499 "dma_device_type": 1 00:11:44.499 }, 00:11:44.499 { 00:11:44.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.499 "dma_device_type": 2 00:11:44.499 } 00:11:44.499 ], 00:11:44.499 "driver_specific": {} 00:11:44.499 } 00:11:44.499 ] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 BaseBdev3 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.499 
11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 [ 00:11:44.499 { 00:11:44.499 "name": "BaseBdev3", 00:11:44.499 "aliases": [ 00:11:44.499 "513747f6-ba61-4bea-bc52-bfc62e312196" 00:11:44.499 ], 00:11:44.499 "product_name": "Malloc disk", 00:11:44.499 "block_size": 512, 00:11:44.499 "num_blocks": 65536, 00:11:44.499 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:44.499 "assigned_rate_limits": { 00:11:44.499 "rw_ios_per_sec": 0, 00:11:44.499 "rw_mbytes_per_sec": 0, 00:11:44.499 "r_mbytes_per_sec": 0, 00:11:44.499 "w_mbytes_per_sec": 0 00:11:44.499 }, 00:11:44.499 "claimed": false, 00:11:44.499 "zoned": false, 00:11:44.499 "supported_io_types": { 00:11:44.499 "read": true, 00:11:44.499 "write": true, 00:11:44.499 "unmap": true, 00:11:44.499 "flush": true, 00:11:44.499 "reset": true, 00:11:44.499 "nvme_admin": false, 00:11:44.499 "nvme_io": false, 00:11:44.499 "nvme_io_md": false, 00:11:44.499 "write_zeroes": true, 00:11:44.499 "zcopy": true, 00:11:44.499 "get_zone_info": false, 00:11:44.499 "zone_management": false, 00:11:44.499 "zone_append": false, 00:11:44.499 "compare": false, 00:11:44.499 "compare_and_write": false, 00:11:44.499 "abort": true, 00:11:44.499 "seek_hole": false, 00:11:44.499 "seek_data": false, 00:11:44.499 
"copy": true, 00:11:44.499 "nvme_iov_md": false 00:11:44.499 }, 00:11:44.499 "memory_domains": [ 00:11:44.499 { 00:11:44.499 "dma_device_id": "system", 00:11:44.499 "dma_device_type": 1 00:11:44.499 }, 00:11:44.499 { 00:11:44.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.499 "dma_device_type": 2 00:11:44.499 } 00:11:44.499 ], 00:11:44.499 "driver_specific": {} 00:11:44.499 } 00:11:44.499 ] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 BaseBdev4 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.499 11:44:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 [ 00:11:44.499 { 00:11:44.499 "name": "BaseBdev4", 00:11:44.499 "aliases": [ 00:11:44.499 "04f39688-97fb-4714-82aa-10818e5b669a" 00:11:44.499 ], 00:11:44.499 "product_name": "Malloc disk", 00:11:44.499 "block_size": 512, 00:11:44.499 "num_blocks": 65536, 00:11:44.499 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:44.499 "assigned_rate_limits": { 00:11:44.499 "rw_ios_per_sec": 0, 00:11:44.499 "rw_mbytes_per_sec": 0, 00:11:44.499 "r_mbytes_per_sec": 0, 00:11:44.499 "w_mbytes_per_sec": 0 00:11:44.499 }, 00:11:44.499 "claimed": false, 00:11:44.499 "zoned": false, 00:11:44.499 "supported_io_types": { 00:11:44.499 "read": true, 00:11:44.499 "write": true, 00:11:44.499 "unmap": true, 00:11:44.499 "flush": true, 00:11:44.499 "reset": true, 00:11:44.499 "nvme_admin": false, 00:11:44.499 "nvme_io": false, 00:11:44.499 "nvme_io_md": false, 00:11:44.499 "write_zeroes": true, 00:11:44.499 "zcopy": true, 00:11:44.499 "get_zone_info": false, 00:11:44.499 "zone_management": false, 00:11:44.499 "zone_append": false, 00:11:44.499 "compare": false, 00:11:44.499 "compare_and_write": false, 00:11:44.499 "abort": true, 00:11:44.499 "seek_hole": false, 00:11:44.499 "seek_data": false, 00:11:44.499 "copy": true, 
00:11:44.499 "nvme_iov_md": false 00:11:44.499 }, 00:11:44.499 "memory_domains": [ 00:11:44.499 { 00:11:44.499 "dma_device_id": "system", 00:11:44.499 "dma_device_type": 1 00:11:44.499 }, 00:11:44.499 { 00:11:44.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.499 "dma_device_type": 2 00:11:44.499 } 00:11:44.499 ], 00:11:44.499 "driver_specific": {} 00:11:44.499 } 00:11:44.499 ] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.499 [2024-11-04 11:44:09.949307] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.499 [2024-11-04 11:44:09.949392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.499 [2024-11-04 11:44:09.949434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.499 [2024-11-04 11:44:09.951619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.499 [2024-11-04 11:44:09.951704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.499 11:44:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.499 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.500 "name": "Existed_Raid", 00:11:44.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.500 "strip_size_kb": 64, 00:11:44.500 "state": "configuring", 00:11:44.500 
"raid_level": "concat", 00:11:44.500 "superblock": false, 00:11:44.500 "num_base_bdevs": 4, 00:11:44.500 "num_base_bdevs_discovered": 3, 00:11:44.500 "num_base_bdevs_operational": 4, 00:11:44.500 "base_bdevs_list": [ 00:11:44.500 { 00:11:44.500 "name": "BaseBdev1", 00:11:44.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.500 "is_configured": false, 00:11:44.500 "data_offset": 0, 00:11:44.500 "data_size": 0 00:11:44.500 }, 00:11:44.500 { 00:11:44.500 "name": "BaseBdev2", 00:11:44.500 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:44.500 "is_configured": true, 00:11:44.500 "data_offset": 0, 00:11:44.500 "data_size": 65536 00:11:44.500 }, 00:11:44.500 { 00:11:44.500 "name": "BaseBdev3", 00:11:44.500 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:44.500 "is_configured": true, 00:11:44.500 "data_offset": 0, 00:11:44.500 "data_size": 65536 00:11:44.500 }, 00:11:44.500 { 00:11:44.500 "name": "BaseBdev4", 00:11:44.500 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:44.500 "is_configured": true, 00:11:44.500 "data_offset": 0, 00:11:44.500 "data_size": 65536 00:11:44.500 } 00:11:44.500 ] 00:11:44.500 }' 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.500 11:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.066 [2024-11-04 11:44:10.372657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.066 "name": "Existed_Raid", 00:11:45.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.066 "strip_size_kb": 64, 00:11:45.066 "state": "configuring", 00:11:45.066 "raid_level": "concat", 00:11:45.066 "superblock": false, 
00:11:45.066 "num_base_bdevs": 4, 00:11:45.066 "num_base_bdevs_discovered": 2, 00:11:45.066 "num_base_bdevs_operational": 4, 00:11:45.066 "base_bdevs_list": [ 00:11:45.066 { 00:11:45.066 "name": "BaseBdev1", 00:11:45.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.066 "is_configured": false, 00:11:45.066 "data_offset": 0, 00:11:45.066 "data_size": 0 00:11:45.066 }, 00:11:45.066 { 00:11:45.066 "name": null, 00:11:45.066 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:45.066 "is_configured": false, 00:11:45.066 "data_offset": 0, 00:11:45.066 "data_size": 65536 00:11:45.066 }, 00:11:45.066 { 00:11:45.066 "name": "BaseBdev3", 00:11:45.066 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:45.066 "is_configured": true, 00:11:45.066 "data_offset": 0, 00:11:45.066 "data_size": 65536 00:11:45.066 }, 00:11:45.066 { 00:11:45.066 "name": "BaseBdev4", 00:11:45.066 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:45.066 "is_configured": true, 00:11:45.066 "data_offset": 0, 00:11:45.066 "data_size": 65536 00:11:45.066 } 00:11:45.066 ] 00:11:45.066 }' 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.066 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:45.324 11:44:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.324 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.582 [2024-11-04 11:44:10.856195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.582 BaseBdev1 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.582 [ 00:11:45.582 { 00:11:45.582 "name": "BaseBdev1", 00:11:45.582 "aliases": [ 00:11:45.582 "043c184f-72ee-4473-9529-e24504b2274a" 00:11:45.582 ], 00:11:45.582 "product_name": "Malloc disk", 00:11:45.582 "block_size": 512, 00:11:45.582 "num_blocks": 65536, 00:11:45.582 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:45.582 "assigned_rate_limits": { 00:11:45.582 "rw_ios_per_sec": 0, 00:11:45.582 "rw_mbytes_per_sec": 0, 00:11:45.582 "r_mbytes_per_sec": 0, 00:11:45.582 "w_mbytes_per_sec": 0 00:11:45.582 }, 00:11:45.582 "claimed": true, 00:11:45.582 "claim_type": "exclusive_write", 00:11:45.582 "zoned": false, 00:11:45.582 "supported_io_types": { 00:11:45.582 "read": true, 00:11:45.582 "write": true, 00:11:45.582 "unmap": true, 00:11:45.582 "flush": true, 00:11:45.582 "reset": true, 00:11:45.582 "nvme_admin": false, 00:11:45.582 "nvme_io": false, 00:11:45.582 "nvme_io_md": false, 00:11:45.582 "write_zeroes": true, 00:11:45.582 "zcopy": true, 00:11:45.582 "get_zone_info": false, 00:11:45.582 "zone_management": false, 00:11:45.582 "zone_append": false, 00:11:45.582 "compare": false, 00:11:45.582 "compare_and_write": false, 00:11:45.582 "abort": true, 00:11:45.582 "seek_hole": false, 00:11:45.582 "seek_data": false, 00:11:45.582 "copy": true, 00:11:45.582 "nvme_iov_md": false 00:11:45.582 }, 00:11:45.582 "memory_domains": [ 00:11:45.582 { 00:11:45.582 "dma_device_id": "system", 00:11:45.582 "dma_device_type": 1 00:11:45.582 }, 00:11:45.582 { 00:11:45.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.582 "dma_device_type": 2 00:11:45.582 } 00:11:45.582 ], 00:11:45.582 "driver_specific": {} 00:11:45.582 } 00:11:45.582 ] 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.582 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.583 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.583 "name": "Existed_Raid", 00:11:45.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.583 "strip_size_kb": 64, 00:11:45.583 "state": "configuring", 00:11:45.583 "raid_level": "concat", 00:11:45.583 "superblock": false, 
00:11:45.583 "num_base_bdevs": 4, 00:11:45.583 "num_base_bdevs_discovered": 3, 00:11:45.583 "num_base_bdevs_operational": 4, 00:11:45.583 "base_bdevs_list": [ 00:11:45.583 { 00:11:45.583 "name": "BaseBdev1", 00:11:45.583 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:45.583 "is_configured": true, 00:11:45.583 "data_offset": 0, 00:11:45.583 "data_size": 65536 00:11:45.583 }, 00:11:45.583 { 00:11:45.583 "name": null, 00:11:45.583 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:45.583 "is_configured": false, 00:11:45.583 "data_offset": 0, 00:11:45.583 "data_size": 65536 00:11:45.583 }, 00:11:45.583 { 00:11:45.583 "name": "BaseBdev3", 00:11:45.583 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:45.583 "is_configured": true, 00:11:45.583 "data_offset": 0, 00:11:45.583 "data_size": 65536 00:11:45.583 }, 00:11:45.583 { 00:11:45.583 "name": "BaseBdev4", 00:11:45.583 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:45.583 "is_configured": true, 00:11:45.583 "data_offset": 0, 00:11:45.583 "data_size": 65536 00:11:45.583 } 00:11:45.583 ] 00:11:45.583 }' 00:11:45.583 11:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.583 11:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:45.841 11:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.841 [2024-11-04 11:44:11.319645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.841 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.099 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.100 "name": "Existed_Raid", 00:11:46.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.100 "strip_size_kb": 64, 00:11:46.100 "state": "configuring", 00:11:46.100 "raid_level": "concat", 00:11:46.100 "superblock": false, 00:11:46.100 "num_base_bdevs": 4, 00:11:46.100 "num_base_bdevs_discovered": 2, 00:11:46.100 "num_base_bdevs_operational": 4, 00:11:46.100 "base_bdevs_list": [ 00:11:46.100 { 00:11:46.100 "name": "BaseBdev1", 00:11:46.100 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:46.100 "is_configured": true, 00:11:46.100 "data_offset": 0, 00:11:46.100 "data_size": 65536 00:11:46.100 }, 00:11:46.100 { 00:11:46.100 "name": null, 00:11:46.100 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:46.100 "is_configured": false, 00:11:46.100 "data_offset": 0, 00:11:46.100 "data_size": 65536 00:11:46.100 }, 00:11:46.100 { 00:11:46.100 "name": null, 00:11:46.100 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:46.100 "is_configured": false, 00:11:46.100 "data_offset": 0, 00:11:46.100 "data_size": 65536 00:11:46.100 }, 00:11:46.100 { 00:11:46.100 "name": "BaseBdev4", 00:11:46.100 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:46.100 "is_configured": true, 00:11:46.100 "data_offset": 0, 00:11:46.100 "data_size": 65536 00:11:46.100 } 00:11:46.100 ] 00:11:46.100 }' 00:11:46.100 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.100 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 [2024-11-04 11:44:11.802890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.363 "name": "Existed_Raid", 00:11:46.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.363 "strip_size_kb": 64, 00:11:46.363 "state": "configuring", 00:11:46.363 "raid_level": "concat", 00:11:46.363 "superblock": false, 00:11:46.363 "num_base_bdevs": 4, 00:11:46.363 "num_base_bdevs_discovered": 3, 00:11:46.363 "num_base_bdevs_operational": 4, 00:11:46.363 "base_bdevs_list": [ 00:11:46.363 { 00:11:46.363 "name": "BaseBdev1", 00:11:46.363 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:46.363 "is_configured": true, 00:11:46.363 "data_offset": 0, 00:11:46.363 "data_size": 65536 00:11:46.363 }, 00:11:46.363 { 00:11:46.363 "name": null, 00:11:46.363 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:46.363 "is_configured": false, 00:11:46.363 "data_offset": 0, 00:11:46.363 "data_size": 65536 00:11:46.363 }, 00:11:46.363 { 00:11:46.363 "name": "BaseBdev3", 00:11:46.363 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:46.363 
"is_configured": true, 00:11:46.363 "data_offset": 0, 00:11:46.363 "data_size": 65536 00:11:46.363 }, 00:11:46.363 { 00:11:46.363 "name": "BaseBdev4", 00:11:46.363 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:46.363 "is_configured": true, 00:11:46.363 "data_offset": 0, 00:11:46.363 "data_size": 65536 00:11:46.363 } 00:11:46.363 ] 00:11:46.363 }' 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.363 11:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.934 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.934 [2024-11-04 11:44:12.358056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.195 "name": "Existed_Raid", 00:11:47.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.195 "strip_size_kb": 64, 00:11:47.195 "state": "configuring", 00:11:47.195 "raid_level": "concat", 00:11:47.195 "superblock": false, 00:11:47.195 "num_base_bdevs": 4, 00:11:47.195 "num_base_bdevs_discovered": 2, 00:11:47.195 "num_base_bdevs_operational": 4, 
00:11:47.195 "base_bdevs_list": [ 00:11:47.195 { 00:11:47.195 "name": null, 00:11:47.195 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:47.195 "is_configured": false, 00:11:47.195 "data_offset": 0, 00:11:47.195 "data_size": 65536 00:11:47.195 }, 00:11:47.195 { 00:11:47.195 "name": null, 00:11:47.195 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:47.195 "is_configured": false, 00:11:47.195 "data_offset": 0, 00:11:47.195 "data_size": 65536 00:11:47.195 }, 00:11:47.195 { 00:11:47.195 "name": "BaseBdev3", 00:11:47.195 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:47.195 "is_configured": true, 00:11:47.195 "data_offset": 0, 00:11:47.195 "data_size": 65536 00:11:47.195 }, 00:11:47.195 { 00:11:47.195 "name": "BaseBdev4", 00:11:47.195 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:47.195 "is_configured": true, 00:11:47.195 "data_offset": 0, 00:11:47.195 "data_size": 65536 00:11:47.195 } 00:11:47.195 ] 00:11:47.195 }' 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.195 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:47.456 11:44:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 [2024-11-04 11:44:12.966624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.456 11:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.456 11:44:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.714 11:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.714 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.714 "name": "Existed_Raid", 00:11:47.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.715 "strip_size_kb": 64, 00:11:47.715 "state": "configuring", 00:11:47.715 "raid_level": "concat", 00:11:47.715 "superblock": false, 00:11:47.715 "num_base_bdevs": 4, 00:11:47.715 "num_base_bdevs_discovered": 3, 00:11:47.715 "num_base_bdevs_operational": 4, 00:11:47.715 "base_bdevs_list": [ 00:11:47.715 { 00:11:47.715 "name": null, 00:11:47.715 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:47.715 "is_configured": false, 00:11:47.715 "data_offset": 0, 00:11:47.715 "data_size": 65536 00:11:47.715 }, 00:11:47.715 { 00:11:47.715 "name": "BaseBdev2", 00:11:47.715 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:47.715 "is_configured": true, 00:11:47.715 "data_offset": 0, 00:11:47.715 "data_size": 65536 00:11:47.715 }, 00:11:47.715 { 00:11:47.715 "name": "BaseBdev3", 00:11:47.715 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:47.715 "is_configured": true, 00:11:47.715 "data_offset": 0, 00:11:47.715 "data_size": 65536 00:11:47.715 }, 00:11:47.715 { 00:11:47.715 "name": "BaseBdev4", 00:11:47.715 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:47.715 "is_configured": true, 00:11:47.715 "data_offset": 0, 00:11:47.715 "data_size": 65536 00:11:47.715 } 00:11:47.715 ] 00:11:47.715 }' 00:11:47.715 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.715 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.972 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.972 11:44:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:47.972 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.972 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.972 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 043c184f-72ee-4473-9529-e24504b2274a 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.231 [2024-11-04 11:44:13.597048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:48.231 [2024-11-04 11:44:13.597132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:48.231 [2024-11-04 11:44:13.597143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:48.231 [2024-11-04 11:44:13.597526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:48.231 [2024-11-04 11:44:13.597759] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:48.231 [2024-11-04 11:44:13.597789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:48.231 [2024-11-04 11:44:13.598128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.231 NewBaseBdev 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.231 [ 00:11:48.231 { 
00:11:48.231 "name": "NewBaseBdev", 00:11:48.231 "aliases": [ 00:11:48.231 "043c184f-72ee-4473-9529-e24504b2274a" 00:11:48.231 ], 00:11:48.231 "product_name": "Malloc disk", 00:11:48.231 "block_size": 512, 00:11:48.231 "num_blocks": 65536, 00:11:48.231 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:48.231 "assigned_rate_limits": { 00:11:48.231 "rw_ios_per_sec": 0, 00:11:48.231 "rw_mbytes_per_sec": 0, 00:11:48.231 "r_mbytes_per_sec": 0, 00:11:48.231 "w_mbytes_per_sec": 0 00:11:48.231 }, 00:11:48.231 "claimed": true, 00:11:48.231 "claim_type": "exclusive_write", 00:11:48.231 "zoned": false, 00:11:48.231 "supported_io_types": { 00:11:48.231 "read": true, 00:11:48.231 "write": true, 00:11:48.231 "unmap": true, 00:11:48.231 "flush": true, 00:11:48.231 "reset": true, 00:11:48.231 "nvme_admin": false, 00:11:48.231 "nvme_io": false, 00:11:48.231 "nvme_io_md": false, 00:11:48.231 "write_zeroes": true, 00:11:48.231 "zcopy": true, 00:11:48.231 "get_zone_info": false, 00:11:48.231 "zone_management": false, 00:11:48.231 "zone_append": false, 00:11:48.231 "compare": false, 00:11:48.231 "compare_and_write": false, 00:11:48.231 "abort": true, 00:11:48.231 "seek_hole": false, 00:11:48.231 "seek_data": false, 00:11:48.231 "copy": true, 00:11:48.231 "nvme_iov_md": false 00:11:48.231 }, 00:11:48.231 "memory_domains": [ 00:11:48.231 { 00:11:48.231 "dma_device_id": "system", 00:11:48.231 "dma_device_type": 1 00:11:48.231 }, 00:11:48.231 { 00:11:48.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.231 "dma_device_type": 2 00:11:48.231 } 00:11:48.231 ], 00:11:48.231 "driver_specific": {} 00:11:48.231 } 00:11:48.231 ] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:48.231 
11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.231 "name": "Existed_Raid", 00:11:48.231 "uuid": "85d8c0b1-9f4e-4135-b5a6-aac15b52eebc", 00:11:48.231 "strip_size_kb": 64, 00:11:48.231 "state": "online", 00:11:48.231 "raid_level": "concat", 00:11:48.231 "superblock": false, 00:11:48.231 "num_base_bdevs": 4, 00:11:48.231 "num_base_bdevs_discovered": 4, 00:11:48.231 
"num_base_bdevs_operational": 4, 00:11:48.231 "base_bdevs_list": [ 00:11:48.231 { 00:11:48.231 "name": "NewBaseBdev", 00:11:48.231 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:48.231 "is_configured": true, 00:11:48.231 "data_offset": 0, 00:11:48.231 "data_size": 65536 00:11:48.231 }, 00:11:48.231 { 00:11:48.231 "name": "BaseBdev2", 00:11:48.231 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:48.231 "is_configured": true, 00:11:48.231 "data_offset": 0, 00:11:48.231 "data_size": 65536 00:11:48.231 }, 00:11:48.231 { 00:11:48.231 "name": "BaseBdev3", 00:11:48.231 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:48.231 "is_configured": true, 00:11:48.231 "data_offset": 0, 00:11:48.231 "data_size": 65536 00:11:48.231 }, 00:11:48.231 { 00:11:48.231 "name": "BaseBdev4", 00:11:48.231 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:48.231 "is_configured": true, 00:11:48.231 "data_offset": 0, 00:11:48.231 "data_size": 65536 00:11:48.231 } 00:11:48.231 ] 00:11:48.231 }' 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.231 11:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.803 [2024-11-04 11:44:14.036867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.803 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.803 "name": "Existed_Raid", 00:11:48.803 "aliases": [ 00:11:48.803 "85d8c0b1-9f4e-4135-b5a6-aac15b52eebc" 00:11:48.803 ], 00:11:48.803 "product_name": "Raid Volume", 00:11:48.803 "block_size": 512, 00:11:48.803 "num_blocks": 262144, 00:11:48.803 "uuid": "85d8c0b1-9f4e-4135-b5a6-aac15b52eebc", 00:11:48.803 "assigned_rate_limits": { 00:11:48.803 "rw_ios_per_sec": 0, 00:11:48.803 "rw_mbytes_per_sec": 0, 00:11:48.803 "r_mbytes_per_sec": 0, 00:11:48.803 "w_mbytes_per_sec": 0 00:11:48.803 }, 00:11:48.803 "claimed": false, 00:11:48.803 "zoned": false, 00:11:48.803 "supported_io_types": { 00:11:48.803 "read": true, 00:11:48.803 "write": true, 00:11:48.803 "unmap": true, 00:11:48.803 "flush": true, 00:11:48.803 "reset": true, 00:11:48.803 "nvme_admin": false, 00:11:48.803 "nvme_io": false, 00:11:48.803 "nvme_io_md": false, 00:11:48.803 "write_zeroes": true, 00:11:48.803 "zcopy": false, 00:11:48.803 "get_zone_info": false, 00:11:48.803 "zone_management": false, 00:11:48.803 "zone_append": false, 00:11:48.803 "compare": false, 00:11:48.803 "compare_and_write": false, 00:11:48.803 "abort": false, 00:11:48.803 "seek_hole": false, 00:11:48.803 "seek_data": false, 00:11:48.803 "copy": false, 00:11:48.803 "nvme_iov_md": false 00:11:48.803 }, 00:11:48.803 "memory_domains": [ 00:11:48.803 { 00:11:48.803 "dma_device_id": "system", 
00:11:48.803 "dma_device_type": 1 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.803 "dma_device_type": 2 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "dma_device_id": "system", 00:11:48.803 "dma_device_type": 1 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.803 "dma_device_type": 2 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "dma_device_id": "system", 00:11:48.803 "dma_device_type": 1 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.803 "dma_device_type": 2 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "dma_device_id": "system", 00:11:48.803 "dma_device_type": 1 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.803 "dma_device_type": 2 00:11:48.803 } 00:11:48.803 ], 00:11:48.803 "driver_specific": { 00:11:48.803 "raid": { 00:11:48.803 "uuid": "85d8c0b1-9f4e-4135-b5a6-aac15b52eebc", 00:11:48.803 "strip_size_kb": 64, 00:11:48.803 "state": "online", 00:11:48.803 "raid_level": "concat", 00:11:48.803 "superblock": false, 00:11:48.803 "num_base_bdevs": 4, 00:11:48.803 "num_base_bdevs_discovered": 4, 00:11:48.803 "num_base_bdevs_operational": 4, 00:11:48.803 "base_bdevs_list": [ 00:11:48.803 { 00:11:48.803 "name": "NewBaseBdev", 00:11:48.803 "uuid": "043c184f-72ee-4473-9529-e24504b2274a", 00:11:48.803 "is_configured": true, 00:11:48.803 "data_offset": 0, 00:11:48.803 "data_size": 65536 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "name": "BaseBdev2", 00:11:48.803 "uuid": "066b625c-8a27-49f9-b357-8659a6c602a5", 00:11:48.803 "is_configured": true, 00:11:48.803 "data_offset": 0, 00:11:48.803 "data_size": 65536 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "name": "BaseBdev3", 00:11:48.803 "uuid": "513747f6-ba61-4bea-bc52-bfc62e312196", 00:11:48.803 "is_configured": true, 00:11:48.803 "data_offset": 0, 00:11:48.803 "data_size": 65536 00:11:48.803 }, 00:11:48.803 { 00:11:48.803 "name": "BaseBdev4", 
00:11:48.804 "uuid": "04f39688-97fb-4714-82aa-10818e5b669a", 00:11:48.804 "is_configured": true, 00:11:48.804 "data_offset": 0, 00:11:48.804 "data_size": 65536 00:11:48.804 } 00:11:48.804 ] 00:11:48.804 } 00:11:48.804 } 00:11:48.804 }' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:48.804 BaseBdev2 00:11:48.804 BaseBdev3 00:11:48.804 BaseBdev4' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.804 [2024-11-04 11:44:14.280032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.804 [2024-11-04 11:44:14.280070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.804 [2024-11-04 11:44:14.280167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.804 [2024-11-04 11:44:14.280242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.804 [2024-11-04 11:44:14.280253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71517 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71517 
']' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71517 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:48.804 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71517 00:11:49.064 killing process with pid 71517 00:11:49.064 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.064 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.064 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71517' 00:11:49.064 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71517 00:11:49.064 11:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71517 00:11:49.064 [2024-11-04 11:44:14.327017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.323 [2024-11-04 11:44:14.738185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:50.724 00:11:50.724 real 0m11.524s 00:11:50.724 user 0m18.224s 00:11:50.724 sys 0m1.988s 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.724 ************************************ 00:11:50.724 END TEST raid_state_function_test 00:11:50.724 ************************************ 00:11:50.724 11:44:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:50.724 
11:44:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:50.724 11:44:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:50.724 11:44:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.724 ************************************ 00:11:50.724 START TEST raid_state_function_test_sb 00:11:50.724 ************************************ 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:50.724 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72186 00:11:50.725 Process raid pid: 72186 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 72186' 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72186 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72186 ']' 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:50.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.725 11:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.725 [2024-11-04 11:44:16.050937] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:11:50.725 [2024-11-04 11:44:16.051555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.988 [2024-11-04 11:44:16.233686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.988 [2024-11-04 11:44:16.359243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.247 [2024-11-04 11:44:16.575532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.247 [2024-11-04 11:44:16.575625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.507 [2024-11-04 11:44:16.900186] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.507 [2024-11-04 11:44:16.900253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.507 [2024-11-04 11:44:16.900267] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.507 [2024-11-04 11:44:16.900281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.507 [2024-11-04 11:44:16.900291] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:51.507 [2024-11-04 11:44:16.900304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.507 [2024-11-04 11:44:16.900314] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:51.507 [2024-11-04 11:44:16.900326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.507 
11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.507 "name": "Existed_Raid", 00:11:51.507 "uuid": "2fb57d69-0c9b-457e-9cc7-432b40c79344", 00:11:51.507 "strip_size_kb": 64, 00:11:51.507 "state": "configuring", 00:11:51.507 "raid_level": "concat", 00:11:51.507 "superblock": true, 00:11:51.507 "num_base_bdevs": 4, 00:11:51.507 "num_base_bdevs_discovered": 0, 00:11:51.507 "num_base_bdevs_operational": 4, 00:11:51.507 "base_bdevs_list": [ 00:11:51.507 { 00:11:51.507 "name": "BaseBdev1", 00:11:51.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.507 "is_configured": false, 00:11:51.507 "data_offset": 0, 00:11:51.507 "data_size": 0 00:11:51.507 }, 00:11:51.507 { 00:11:51.507 "name": "BaseBdev2", 00:11:51.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.507 "is_configured": false, 00:11:51.507 "data_offset": 0, 00:11:51.507 "data_size": 0 00:11:51.507 }, 00:11:51.507 { 00:11:51.507 "name": "BaseBdev3", 00:11:51.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.507 "is_configured": false, 00:11:51.507 "data_offset": 0, 00:11:51.507 "data_size": 0 00:11:51.507 }, 00:11:51.507 { 00:11:51.507 "name": "BaseBdev4", 00:11:51.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.507 "is_configured": false, 00:11:51.507 "data_offset": 0, 00:11:51.507 "data_size": 0 00:11:51.507 } 00:11:51.507 ] 00:11:51.507 }' 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.507 11:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.074 11:44:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.074 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.074 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.074 [2024-11-04 11:44:17.323417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.074 [2024-11-04 11:44:17.323466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:52.074 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.074 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.074 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.074 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.074 [2024-11-04 11:44:17.331445] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.074 [2024-11-04 11:44:17.331586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.074 [2024-11-04 11:44:17.331642] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.075 [2024-11-04 11:44:17.331716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.075 [2024-11-04 11:44:17.331761] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.075 [2024-11-04 11:44:17.331830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.075 [2024-11-04 11:44:17.331874] bdev.c:8271:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:52.075 [2024-11-04 11:44:17.331943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 [2024-11-04 11:44:17.379420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.075 BaseBdev1 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 [ 00:11:52.075 { 00:11:52.075 "name": "BaseBdev1", 00:11:52.075 "aliases": [ 00:11:52.075 "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace" 00:11:52.075 ], 00:11:52.075 "product_name": "Malloc disk", 00:11:52.075 "block_size": 512, 00:11:52.075 "num_blocks": 65536, 00:11:52.075 "uuid": "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace", 00:11:52.075 "assigned_rate_limits": { 00:11:52.075 "rw_ios_per_sec": 0, 00:11:52.075 "rw_mbytes_per_sec": 0, 00:11:52.075 "r_mbytes_per_sec": 0, 00:11:52.075 "w_mbytes_per_sec": 0 00:11:52.075 }, 00:11:52.075 "claimed": true, 00:11:52.075 "claim_type": "exclusive_write", 00:11:52.075 "zoned": false, 00:11:52.075 "supported_io_types": { 00:11:52.075 "read": true, 00:11:52.075 "write": true, 00:11:52.075 "unmap": true, 00:11:52.075 "flush": true, 00:11:52.075 "reset": true, 00:11:52.075 "nvme_admin": false, 00:11:52.075 "nvme_io": false, 00:11:52.075 "nvme_io_md": false, 00:11:52.075 "write_zeroes": true, 00:11:52.075 "zcopy": true, 00:11:52.075 "get_zone_info": false, 00:11:52.075 "zone_management": false, 00:11:52.075 "zone_append": false, 00:11:52.075 "compare": false, 00:11:52.075 "compare_and_write": false, 00:11:52.075 "abort": true, 00:11:52.075 "seek_hole": false, 00:11:52.075 "seek_data": false, 00:11:52.075 "copy": true, 00:11:52.075 "nvme_iov_md": false 00:11:52.075 }, 00:11:52.075 "memory_domains": [ 00:11:52.075 { 00:11:52.075 "dma_device_id": "system", 00:11:52.075 "dma_device_type": 1 00:11:52.075 }, 00:11:52.075 { 00:11:52.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.075 "dma_device_type": 2 00:11:52.075 } 
00:11:52.075 ], 00:11:52.075 "driver_specific": {} 00:11:52.075 } 00:11:52.075 ] 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 11:44:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.075 "name": "Existed_Raid", 00:11:52.075 "uuid": "c5bfe7fb-5d0c-433e-83a9-b404b4b4f781", 00:11:52.075 "strip_size_kb": 64, 00:11:52.075 "state": "configuring", 00:11:52.075 "raid_level": "concat", 00:11:52.075 "superblock": true, 00:11:52.075 "num_base_bdevs": 4, 00:11:52.075 "num_base_bdevs_discovered": 1, 00:11:52.075 "num_base_bdevs_operational": 4, 00:11:52.075 "base_bdevs_list": [ 00:11:52.075 { 00:11:52.075 "name": "BaseBdev1", 00:11:52.075 "uuid": "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace", 00:11:52.075 "is_configured": true, 00:11:52.075 "data_offset": 2048, 00:11:52.075 "data_size": 63488 00:11:52.075 }, 00:11:52.075 { 00:11:52.075 "name": "BaseBdev2", 00:11:52.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.075 "is_configured": false, 00:11:52.075 "data_offset": 0, 00:11:52.075 "data_size": 0 00:11:52.075 }, 00:11:52.075 { 00:11:52.075 "name": "BaseBdev3", 00:11:52.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.075 "is_configured": false, 00:11:52.075 "data_offset": 0, 00:11:52.075 "data_size": 0 00:11:52.075 }, 00:11:52.075 { 00:11:52.075 "name": "BaseBdev4", 00:11:52.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.075 "is_configured": false, 00:11:52.075 "data_offset": 0, 00:11:52.075 "data_size": 0 00:11:52.075 } 00:11:52.075 ] 00:11:52.075 }' 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.075 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.337 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.338 11:44:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.338 [2024-11-04 11:44:17.798760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.338 [2024-11-04 11:44:17.798901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.338 [2024-11-04 11:44:17.806790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.338 [2024-11-04 11:44:17.808728] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.338 [2024-11-04 11:44:17.808839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.338 [2024-11-04 11:44:17.808910] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:52.338 [2024-11-04 11:44:17.808981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:52.338 [2024-11-04 11:44:17.809027] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:52.338 [2024-11-04 11:44:17.809095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:52.338 "name": "Existed_Raid", 00:11:52.338 "uuid": "ec13df75-ec15-4e57-a598-1ba4f0901463", 00:11:52.338 "strip_size_kb": 64, 00:11:52.338 "state": "configuring", 00:11:52.338 "raid_level": "concat", 00:11:52.338 "superblock": true, 00:11:52.338 "num_base_bdevs": 4, 00:11:52.338 "num_base_bdevs_discovered": 1, 00:11:52.338 "num_base_bdevs_operational": 4, 00:11:52.338 "base_bdevs_list": [ 00:11:52.338 { 00:11:52.338 "name": "BaseBdev1", 00:11:52.338 "uuid": "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace", 00:11:52.338 "is_configured": true, 00:11:52.338 "data_offset": 2048, 00:11:52.338 "data_size": 63488 00:11:52.338 }, 00:11:52.338 { 00:11:52.338 "name": "BaseBdev2", 00:11:52.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.338 "is_configured": false, 00:11:52.338 "data_offset": 0, 00:11:52.338 "data_size": 0 00:11:52.338 }, 00:11:52.338 { 00:11:52.338 "name": "BaseBdev3", 00:11:52.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.338 "is_configured": false, 00:11:52.338 "data_offset": 0, 00:11:52.338 "data_size": 0 00:11:52.338 }, 00:11:52.338 { 00:11:52.338 "name": "BaseBdev4", 00:11:52.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.338 "is_configured": false, 00:11:52.338 "data_offset": 0, 00:11:52.338 "data_size": 0 00:11:52.338 } 00:11:52.338 ] 00:11:52.338 }' 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.338 11:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.906 [2024-11-04 11:44:18.308086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:52.906 BaseBdev2 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.906 [ 00:11:52.906 { 00:11:52.906 "name": "BaseBdev2", 00:11:52.906 "aliases": [ 00:11:52.906 "b5704428-b86c-4191-9965-ab37cfcd256e" 00:11:52.906 ], 00:11:52.906 "product_name": "Malloc disk", 00:11:52.906 "block_size": 512, 00:11:52.906 "num_blocks": 65536, 00:11:52.906 "uuid": "b5704428-b86c-4191-9965-ab37cfcd256e", 
00:11:52.906 "assigned_rate_limits": { 00:11:52.906 "rw_ios_per_sec": 0, 00:11:52.906 "rw_mbytes_per_sec": 0, 00:11:52.906 "r_mbytes_per_sec": 0, 00:11:52.906 "w_mbytes_per_sec": 0 00:11:52.906 }, 00:11:52.906 "claimed": true, 00:11:52.906 "claim_type": "exclusive_write", 00:11:52.906 "zoned": false, 00:11:52.906 "supported_io_types": { 00:11:52.906 "read": true, 00:11:52.906 "write": true, 00:11:52.906 "unmap": true, 00:11:52.906 "flush": true, 00:11:52.906 "reset": true, 00:11:52.906 "nvme_admin": false, 00:11:52.906 "nvme_io": false, 00:11:52.906 "nvme_io_md": false, 00:11:52.906 "write_zeroes": true, 00:11:52.906 "zcopy": true, 00:11:52.906 "get_zone_info": false, 00:11:52.906 "zone_management": false, 00:11:52.906 "zone_append": false, 00:11:52.906 "compare": false, 00:11:52.906 "compare_and_write": false, 00:11:52.906 "abort": true, 00:11:52.906 "seek_hole": false, 00:11:52.906 "seek_data": false, 00:11:52.906 "copy": true, 00:11:52.906 "nvme_iov_md": false 00:11:52.906 }, 00:11:52.906 "memory_domains": [ 00:11:52.906 { 00:11:52.906 "dma_device_id": "system", 00:11:52.906 "dma_device_type": 1 00:11:52.906 }, 00:11:52.906 { 00:11:52.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.906 "dma_device_type": 2 00:11:52.906 } 00:11:52.906 ], 00:11:52.906 "driver_specific": {} 00:11:52.906 } 00:11:52.906 ] 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.906 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.906 "name": "Existed_Raid", 00:11:52.906 "uuid": "ec13df75-ec15-4e57-a598-1ba4f0901463", 00:11:52.906 "strip_size_kb": 64, 00:11:52.906 "state": "configuring", 00:11:52.906 "raid_level": "concat", 00:11:52.906 "superblock": true, 00:11:52.906 "num_base_bdevs": 4, 00:11:52.906 "num_base_bdevs_discovered": 2, 00:11:52.906 
"num_base_bdevs_operational": 4, 00:11:52.906 "base_bdevs_list": [ 00:11:52.906 { 00:11:52.906 "name": "BaseBdev1", 00:11:52.907 "uuid": "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace", 00:11:52.907 "is_configured": true, 00:11:52.907 "data_offset": 2048, 00:11:52.907 "data_size": 63488 00:11:52.907 }, 00:11:52.907 { 00:11:52.907 "name": "BaseBdev2", 00:11:52.907 "uuid": "b5704428-b86c-4191-9965-ab37cfcd256e", 00:11:52.907 "is_configured": true, 00:11:52.907 "data_offset": 2048, 00:11:52.907 "data_size": 63488 00:11:52.907 }, 00:11:52.907 { 00:11:52.907 "name": "BaseBdev3", 00:11:52.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.907 "is_configured": false, 00:11:52.907 "data_offset": 0, 00:11:52.907 "data_size": 0 00:11:52.907 }, 00:11:52.907 { 00:11:52.907 "name": "BaseBdev4", 00:11:52.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.907 "is_configured": false, 00:11:52.907 "data_offset": 0, 00:11:52.907 "data_size": 0 00:11:52.907 } 00:11:52.907 ] 00:11:52.907 }' 00:11:52.907 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.907 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.475 [2024-11-04 11:44:18.805635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.475 BaseBdev3 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.475 [ 00:11:53.475 { 00:11:53.475 "name": "BaseBdev3", 00:11:53.475 "aliases": [ 00:11:53.475 "2fc6fbf1-d073-4999-9150-754c9e569c35" 00:11:53.475 ], 00:11:53.475 "product_name": "Malloc disk", 00:11:53.475 "block_size": 512, 00:11:53.475 "num_blocks": 65536, 00:11:53.475 "uuid": "2fc6fbf1-d073-4999-9150-754c9e569c35", 00:11:53.475 "assigned_rate_limits": { 00:11:53.475 "rw_ios_per_sec": 0, 00:11:53.475 "rw_mbytes_per_sec": 0, 00:11:53.475 "r_mbytes_per_sec": 0, 00:11:53.475 "w_mbytes_per_sec": 0 00:11:53.475 }, 00:11:53.475 "claimed": true, 00:11:53.475 "claim_type": "exclusive_write", 00:11:53.475 "zoned": false, 00:11:53.475 "supported_io_types": { 
00:11:53.475 "read": true, 00:11:53.475 "write": true, 00:11:53.475 "unmap": true, 00:11:53.475 "flush": true, 00:11:53.475 "reset": true, 00:11:53.475 "nvme_admin": false, 00:11:53.475 "nvme_io": false, 00:11:53.475 "nvme_io_md": false, 00:11:53.475 "write_zeroes": true, 00:11:53.475 "zcopy": true, 00:11:53.475 "get_zone_info": false, 00:11:53.475 "zone_management": false, 00:11:53.475 "zone_append": false, 00:11:53.475 "compare": false, 00:11:53.475 "compare_and_write": false, 00:11:53.475 "abort": true, 00:11:53.475 "seek_hole": false, 00:11:53.475 "seek_data": false, 00:11:53.475 "copy": true, 00:11:53.475 "nvme_iov_md": false 00:11:53.475 }, 00:11:53.475 "memory_domains": [ 00:11:53.475 { 00:11:53.475 "dma_device_id": "system", 00:11:53.475 "dma_device_type": 1 00:11:53.475 }, 00:11:53.475 { 00:11:53.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.475 "dma_device_type": 2 00:11:53.475 } 00:11:53.475 ], 00:11:53.475 "driver_specific": {} 00:11:53.475 } 00:11:53.475 ] 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.475 "name": "Existed_Raid", 00:11:53.475 "uuid": "ec13df75-ec15-4e57-a598-1ba4f0901463", 00:11:53.475 "strip_size_kb": 64, 00:11:53.475 "state": "configuring", 00:11:53.475 "raid_level": "concat", 00:11:53.475 "superblock": true, 00:11:53.475 "num_base_bdevs": 4, 00:11:53.475 "num_base_bdevs_discovered": 3, 00:11:53.475 "num_base_bdevs_operational": 4, 00:11:53.475 "base_bdevs_list": [ 00:11:53.475 { 00:11:53.475 "name": "BaseBdev1", 00:11:53.475 "uuid": "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace", 00:11:53.475 "is_configured": true, 00:11:53.475 "data_offset": 2048, 00:11:53.475 "data_size": 63488 00:11:53.475 }, 00:11:53.475 { 00:11:53.475 "name": "BaseBdev2", 00:11:53.475 
"uuid": "b5704428-b86c-4191-9965-ab37cfcd256e", 00:11:53.475 "is_configured": true, 00:11:53.475 "data_offset": 2048, 00:11:53.475 "data_size": 63488 00:11:53.475 }, 00:11:53.475 { 00:11:53.475 "name": "BaseBdev3", 00:11:53.475 "uuid": "2fc6fbf1-d073-4999-9150-754c9e569c35", 00:11:53.475 "is_configured": true, 00:11:53.475 "data_offset": 2048, 00:11:53.475 "data_size": 63488 00:11:53.475 }, 00:11:53.475 { 00:11:53.475 "name": "BaseBdev4", 00:11:53.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.475 "is_configured": false, 00:11:53.475 "data_offset": 0, 00:11:53.475 "data_size": 0 00:11:53.475 } 00:11:53.475 ] 00:11:53.475 }' 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.475 11:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.043 [2024-11-04 11:44:19.350397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.043 [2024-11-04 11:44:19.350866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:54.043 [2024-11-04 11:44:19.350934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:54.043 [2024-11-04 11:44:19.351326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:54.043 BaseBdev4 00:11:54.043 [2024-11-04 11:44:19.351626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:54.043 [2024-11-04 11:44:19.351715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:54.043 [2024-11-04 11:44:19.351974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.043 [ 00:11:54.043 { 00:11:54.043 "name": "BaseBdev4", 00:11:54.043 "aliases": [ 00:11:54.043 "91f08e92-1cf4-4535-858b-bc9a6cd50c6a" 00:11:54.043 ], 00:11:54.043 "product_name": "Malloc disk", 00:11:54.043 "block_size": 512, 00:11:54.043 
"num_blocks": 65536, 00:11:54.043 "uuid": "91f08e92-1cf4-4535-858b-bc9a6cd50c6a", 00:11:54.043 "assigned_rate_limits": { 00:11:54.043 "rw_ios_per_sec": 0, 00:11:54.043 "rw_mbytes_per_sec": 0, 00:11:54.043 "r_mbytes_per_sec": 0, 00:11:54.043 "w_mbytes_per_sec": 0 00:11:54.043 }, 00:11:54.043 "claimed": true, 00:11:54.043 "claim_type": "exclusive_write", 00:11:54.043 "zoned": false, 00:11:54.043 "supported_io_types": { 00:11:54.043 "read": true, 00:11:54.043 "write": true, 00:11:54.043 "unmap": true, 00:11:54.043 "flush": true, 00:11:54.043 "reset": true, 00:11:54.043 "nvme_admin": false, 00:11:54.043 "nvme_io": false, 00:11:54.043 "nvme_io_md": false, 00:11:54.043 "write_zeroes": true, 00:11:54.043 "zcopy": true, 00:11:54.043 "get_zone_info": false, 00:11:54.043 "zone_management": false, 00:11:54.043 "zone_append": false, 00:11:54.043 "compare": false, 00:11:54.043 "compare_and_write": false, 00:11:54.043 "abort": true, 00:11:54.043 "seek_hole": false, 00:11:54.043 "seek_data": false, 00:11:54.043 "copy": true, 00:11:54.043 "nvme_iov_md": false 00:11:54.043 }, 00:11:54.043 "memory_domains": [ 00:11:54.043 { 00:11:54.043 "dma_device_id": "system", 00:11:54.043 "dma_device_type": 1 00:11:54.043 }, 00:11:54.043 { 00:11:54.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.043 "dma_device_type": 2 00:11:54.043 } 00:11:54.043 ], 00:11:54.043 "driver_specific": {} 00:11:54.043 } 00:11:54.043 ] 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:54.043 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.044 "name": "Existed_Raid", 00:11:54.044 "uuid": "ec13df75-ec15-4e57-a598-1ba4f0901463", 00:11:54.044 "strip_size_kb": 64, 00:11:54.044 "state": "online", 00:11:54.044 "raid_level": "concat", 00:11:54.044 "superblock": true, 00:11:54.044 "num_base_bdevs": 4, 
00:11:54.044 "num_base_bdevs_discovered": 4, 00:11:54.044 "num_base_bdevs_operational": 4, 00:11:54.044 "base_bdevs_list": [ 00:11:54.044 { 00:11:54.044 "name": "BaseBdev1", 00:11:54.044 "uuid": "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace", 00:11:54.044 "is_configured": true, 00:11:54.044 "data_offset": 2048, 00:11:54.044 "data_size": 63488 00:11:54.044 }, 00:11:54.044 { 00:11:54.044 "name": "BaseBdev2", 00:11:54.044 "uuid": "b5704428-b86c-4191-9965-ab37cfcd256e", 00:11:54.044 "is_configured": true, 00:11:54.044 "data_offset": 2048, 00:11:54.044 "data_size": 63488 00:11:54.044 }, 00:11:54.044 { 00:11:54.044 "name": "BaseBdev3", 00:11:54.044 "uuid": "2fc6fbf1-d073-4999-9150-754c9e569c35", 00:11:54.044 "is_configured": true, 00:11:54.044 "data_offset": 2048, 00:11:54.044 "data_size": 63488 00:11:54.044 }, 00:11:54.044 { 00:11:54.044 "name": "BaseBdev4", 00:11:54.044 "uuid": "91f08e92-1cf4-4535-858b-bc9a6cd50c6a", 00:11:54.044 "is_configured": true, 00:11:54.044 "data_offset": 2048, 00:11:54.044 "data_size": 63488 00:11:54.044 } 00:11:54.044 ] 00:11:54.044 }' 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.044 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:54.612 
11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.612 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.612 [2024-11-04 11:44:19.838010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:54.613 "name": "Existed_Raid", 00:11:54.613 "aliases": [ 00:11:54.613 "ec13df75-ec15-4e57-a598-1ba4f0901463" 00:11:54.613 ], 00:11:54.613 "product_name": "Raid Volume", 00:11:54.613 "block_size": 512, 00:11:54.613 "num_blocks": 253952, 00:11:54.613 "uuid": "ec13df75-ec15-4e57-a598-1ba4f0901463", 00:11:54.613 "assigned_rate_limits": { 00:11:54.613 "rw_ios_per_sec": 0, 00:11:54.613 "rw_mbytes_per_sec": 0, 00:11:54.613 "r_mbytes_per_sec": 0, 00:11:54.613 "w_mbytes_per_sec": 0 00:11:54.613 }, 00:11:54.613 "claimed": false, 00:11:54.613 "zoned": false, 00:11:54.613 "supported_io_types": { 00:11:54.613 "read": true, 00:11:54.613 "write": true, 00:11:54.613 "unmap": true, 00:11:54.613 "flush": true, 00:11:54.613 "reset": true, 00:11:54.613 "nvme_admin": false, 00:11:54.613 "nvme_io": false, 00:11:54.613 "nvme_io_md": false, 00:11:54.613 "write_zeroes": true, 00:11:54.613 "zcopy": false, 00:11:54.613 "get_zone_info": false, 00:11:54.613 "zone_management": false, 00:11:54.613 "zone_append": false, 00:11:54.613 "compare": false, 00:11:54.613 "compare_and_write": false, 00:11:54.613 "abort": false, 00:11:54.613 "seek_hole": false, 00:11:54.613 "seek_data": false, 00:11:54.613 "copy": false, 00:11:54.613 
"nvme_iov_md": false 00:11:54.613 }, 00:11:54.613 "memory_domains": [ 00:11:54.613 { 00:11:54.613 "dma_device_id": "system", 00:11:54.613 "dma_device_type": 1 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.613 "dma_device_type": 2 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "dma_device_id": "system", 00:11:54.613 "dma_device_type": 1 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.613 "dma_device_type": 2 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "dma_device_id": "system", 00:11:54.613 "dma_device_type": 1 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.613 "dma_device_type": 2 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "dma_device_id": "system", 00:11:54.613 "dma_device_type": 1 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.613 "dma_device_type": 2 00:11:54.613 } 00:11:54.613 ], 00:11:54.613 "driver_specific": { 00:11:54.613 "raid": { 00:11:54.613 "uuid": "ec13df75-ec15-4e57-a598-1ba4f0901463", 00:11:54.613 "strip_size_kb": 64, 00:11:54.613 "state": "online", 00:11:54.613 "raid_level": "concat", 00:11:54.613 "superblock": true, 00:11:54.613 "num_base_bdevs": 4, 00:11:54.613 "num_base_bdevs_discovered": 4, 00:11:54.613 "num_base_bdevs_operational": 4, 00:11:54.613 "base_bdevs_list": [ 00:11:54.613 { 00:11:54.613 "name": "BaseBdev1", 00:11:54.613 "uuid": "ea7ff9ee-f87b-4d67-b5af-ebb826a7eace", 00:11:54.613 "is_configured": true, 00:11:54.613 "data_offset": 2048, 00:11:54.613 "data_size": 63488 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "name": "BaseBdev2", 00:11:54.613 "uuid": "b5704428-b86c-4191-9965-ab37cfcd256e", 00:11:54.613 "is_configured": true, 00:11:54.613 "data_offset": 2048, 00:11:54.613 "data_size": 63488 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "name": "BaseBdev3", 00:11:54.613 "uuid": "2fc6fbf1-d073-4999-9150-754c9e569c35", 00:11:54.613 "is_configured": true, 
00:11:54.613 "data_offset": 2048, 00:11:54.613 "data_size": 63488 00:11:54.613 }, 00:11:54.613 { 00:11:54.613 "name": "BaseBdev4", 00:11:54.613 "uuid": "91f08e92-1cf4-4535-858b-bc9a6cd50c6a", 00:11:54.613 "is_configured": true, 00:11:54.613 "data_offset": 2048, 00:11:54.613 "data_size": 63488 00:11:54.613 } 00:11:54.613 ] 00:11:54.613 } 00:11:54.613 } 00:11:54.613 }' 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:54.613 BaseBdev2 00:11:54.613 BaseBdev3 00:11:54.613 BaseBdev4' 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 11:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.613 11:44:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.613 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.874 [2024-11-04 11:44:20.165213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.874 [2024-11-04 11:44:20.165316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.874 [2024-11-04 11:44:20.165452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.874 "name": "Existed_Raid", 00:11:54.874 "uuid": "ec13df75-ec15-4e57-a598-1ba4f0901463", 00:11:54.874 "strip_size_kb": 64, 00:11:54.874 "state": "offline", 00:11:54.874 "raid_level": "concat", 00:11:54.874 "superblock": true, 00:11:54.874 "num_base_bdevs": 4, 00:11:54.874 "num_base_bdevs_discovered": 3, 00:11:54.874 "num_base_bdevs_operational": 3, 00:11:54.874 "base_bdevs_list": [ 00:11:54.874 { 00:11:54.874 "name": null, 00:11:54.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.874 "is_configured": false, 00:11:54.874 "data_offset": 0, 00:11:54.874 "data_size": 63488 00:11:54.874 }, 00:11:54.874 { 00:11:54.874 "name": "BaseBdev2", 00:11:54.874 "uuid": "b5704428-b86c-4191-9965-ab37cfcd256e", 00:11:54.874 "is_configured": true, 00:11:54.874 "data_offset": 2048, 00:11:54.874 "data_size": 63488 00:11:54.874 }, 00:11:54.874 { 00:11:54.874 "name": "BaseBdev3", 00:11:54.874 "uuid": "2fc6fbf1-d073-4999-9150-754c9e569c35", 00:11:54.874 "is_configured": true, 00:11:54.874 "data_offset": 2048, 00:11:54.874 "data_size": 63488 00:11:54.874 }, 00:11:54.874 { 00:11:54.874 "name": "BaseBdev4", 00:11:54.874 "uuid": "91f08e92-1cf4-4535-858b-bc9a6cd50c6a", 00:11:54.874 "is_configured": true, 00:11:54.874 "data_offset": 2048, 00:11:54.874 "data_size": 63488 00:11:54.874 } 00:11:54.874 ] 00:11:54.874 }' 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.874 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.444 11:44:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 [2024-11-04 11:44:20.750357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 11:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 [2024-11-04 11:44:20.903329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:55.704 11:44:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.704 [2024-11-04 11:44:21.060681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:55.704 [2024-11-04 11:44:21.060741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.704 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.965 BaseBdev2 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.965 [ 00:11:55.965 { 00:11:55.965 "name": "BaseBdev2", 00:11:55.965 "aliases": [ 00:11:55.965 
"3653a295-2594-4206-9ec2-4967203fa3be" 00:11:55.965 ], 00:11:55.965 "product_name": "Malloc disk", 00:11:55.965 "block_size": 512, 00:11:55.965 "num_blocks": 65536, 00:11:55.965 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:55.965 "assigned_rate_limits": { 00:11:55.965 "rw_ios_per_sec": 0, 00:11:55.965 "rw_mbytes_per_sec": 0, 00:11:55.965 "r_mbytes_per_sec": 0, 00:11:55.965 "w_mbytes_per_sec": 0 00:11:55.965 }, 00:11:55.965 "claimed": false, 00:11:55.965 "zoned": false, 00:11:55.965 "supported_io_types": { 00:11:55.965 "read": true, 00:11:55.965 "write": true, 00:11:55.965 "unmap": true, 00:11:55.965 "flush": true, 00:11:55.965 "reset": true, 00:11:55.965 "nvme_admin": false, 00:11:55.965 "nvme_io": false, 00:11:55.965 "nvme_io_md": false, 00:11:55.965 "write_zeroes": true, 00:11:55.965 "zcopy": true, 00:11:55.965 "get_zone_info": false, 00:11:55.965 "zone_management": false, 00:11:55.965 "zone_append": false, 00:11:55.965 "compare": false, 00:11:55.965 "compare_and_write": false, 00:11:55.965 "abort": true, 00:11:55.965 "seek_hole": false, 00:11:55.965 "seek_data": false, 00:11:55.965 "copy": true, 00:11:55.965 "nvme_iov_md": false 00:11:55.965 }, 00:11:55.965 "memory_domains": [ 00:11:55.965 { 00:11:55.965 "dma_device_id": "system", 00:11:55.965 "dma_device_type": 1 00:11:55.965 }, 00:11:55.965 { 00:11:55.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.965 "dma_device_type": 2 00:11:55.965 } 00:11:55.965 ], 00:11:55.965 "driver_specific": {} 00:11:55.965 } 00:11:55.965 ] 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.965 11:44:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.965 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.965 BaseBdev3 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.966 [ 00:11:55.966 { 
00:11:55.966 "name": "BaseBdev3", 00:11:55.966 "aliases": [ 00:11:55.966 "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07" 00:11:55.966 ], 00:11:55.966 "product_name": "Malloc disk", 00:11:55.966 "block_size": 512, 00:11:55.966 "num_blocks": 65536, 00:11:55.966 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:55.966 "assigned_rate_limits": { 00:11:55.966 "rw_ios_per_sec": 0, 00:11:55.966 "rw_mbytes_per_sec": 0, 00:11:55.966 "r_mbytes_per_sec": 0, 00:11:55.966 "w_mbytes_per_sec": 0 00:11:55.966 }, 00:11:55.966 "claimed": false, 00:11:55.966 "zoned": false, 00:11:55.966 "supported_io_types": { 00:11:55.966 "read": true, 00:11:55.966 "write": true, 00:11:55.966 "unmap": true, 00:11:55.966 "flush": true, 00:11:55.966 "reset": true, 00:11:55.966 "nvme_admin": false, 00:11:55.966 "nvme_io": false, 00:11:55.966 "nvme_io_md": false, 00:11:55.966 "write_zeroes": true, 00:11:55.966 "zcopy": true, 00:11:55.966 "get_zone_info": false, 00:11:55.966 "zone_management": false, 00:11:55.966 "zone_append": false, 00:11:55.966 "compare": false, 00:11:55.966 "compare_and_write": false, 00:11:55.966 "abort": true, 00:11:55.966 "seek_hole": false, 00:11:55.966 "seek_data": false, 00:11:55.966 "copy": true, 00:11:55.966 "nvme_iov_md": false 00:11:55.966 }, 00:11:55.966 "memory_domains": [ 00:11:55.966 { 00:11:55.966 "dma_device_id": "system", 00:11:55.966 "dma_device_type": 1 00:11:55.966 }, 00:11:55.966 { 00:11:55.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.966 "dma_device_type": 2 00:11:55.966 } 00:11:55.966 ], 00:11:55.966 "driver_specific": {} 00:11:55.966 } 00:11:55.966 ] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.966 BaseBdev4 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:55.966 [ 00:11:55.966 { 00:11:55.966 "name": "BaseBdev4", 00:11:55.966 "aliases": [ 00:11:55.966 "da70a5ac-1f41-4f8e-a7fb-da57780bef44" 00:11:55.966 ], 00:11:55.966 "product_name": "Malloc disk", 00:11:55.966 "block_size": 512, 00:11:55.966 "num_blocks": 65536, 00:11:55.966 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:55.966 "assigned_rate_limits": { 00:11:55.966 "rw_ios_per_sec": 0, 00:11:55.966 "rw_mbytes_per_sec": 0, 00:11:55.966 "r_mbytes_per_sec": 0, 00:11:55.966 "w_mbytes_per_sec": 0 00:11:55.966 }, 00:11:55.966 "claimed": false, 00:11:55.966 "zoned": false, 00:11:55.966 "supported_io_types": { 00:11:55.966 "read": true, 00:11:55.966 "write": true, 00:11:55.966 "unmap": true, 00:11:55.966 "flush": true, 00:11:55.966 "reset": true, 00:11:55.966 "nvme_admin": false, 00:11:55.966 "nvme_io": false, 00:11:55.966 "nvme_io_md": false, 00:11:55.966 "write_zeroes": true, 00:11:55.966 "zcopy": true, 00:11:55.966 "get_zone_info": false, 00:11:55.966 "zone_management": false, 00:11:55.966 "zone_append": false, 00:11:55.966 "compare": false, 00:11:55.966 "compare_and_write": false, 00:11:55.966 "abort": true, 00:11:55.966 "seek_hole": false, 00:11:55.966 "seek_data": false, 00:11:55.966 "copy": true, 00:11:55.966 "nvme_iov_md": false 00:11:55.966 }, 00:11:55.966 "memory_domains": [ 00:11:55.966 { 00:11:55.966 "dma_device_id": "system", 00:11:55.966 "dma_device_type": 1 00:11:55.966 }, 00:11:55.966 { 00:11:55.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.966 "dma_device_type": 2 00:11:55.966 } 00:11:55.966 ], 00:11:55.966 "driver_specific": {} 00:11:55.966 } 00:11:55.966 ] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.966 11:44:21 
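Editor's sketch (not from the log): the array-assembly call the trace issues next, a superblock (`-s`) concat raid with a 64 KiB strip size over four base bdevs. `rpc_cmd` is again a hypothetical stub for `scripts/rpc.py`; with BaseBdev1 still missing, a real target leaves Existed_Raid in the "configuring" state rather than bringing it online.

```shell
rpc_cmd() { echo "rpc: $*"; }   # hypothetical stub; real runs go through scripts/rpc.py

# -z: strip size in KiB, -s: write a superblock, -r: raid level
out=$(rpc_cmd bdev_raid_create -z 64 -s -r concat \
  -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid)
echo "$out"
```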
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.966 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.966 [2024-11-04 11:44:21.482619] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.966 [2024-11-04 11:44:21.482753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.966 [2024-11-04 11:44:21.482839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.966 [2024-11-04 11:44:21.484976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.226 [2024-11-04 11:44:21.485126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.226 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.227 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.227 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.227 "name": "Existed_Raid", 00:11:56.227 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:56.227 "strip_size_kb": 64, 00:11:56.227 "state": "configuring", 00:11:56.227 "raid_level": "concat", 00:11:56.227 "superblock": true, 00:11:56.227 "num_base_bdevs": 4, 00:11:56.227 "num_base_bdevs_discovered": 3, 00:11:56.227 "num_base_bdevs_operational": 4, 00:11:56.227 "base_bdevs_list": [ 00:11:56.227 { 00:11:56.227 "name": "BaseBdev1", 00:11:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.227 "is_configured": false, 00:11:56.227 "data_offset": 0, 00:11:56.227 "data_size": 0 00:11:56.227 }, 00:11:56.227 { 00:11:56.227 "name": "BaseBdev2", 00:11:56.227 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:56.227 "is_configured": true, 00:11:56.227 "data_offset": 2048, 00:11:56.227 "data_size": 63488 
00:11:56.227 }, 00:11:56.227 { 00:11:56.227 "name": "BaseBdev3", 00:11:56.227 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:56.227 "is_configured": true, 00:11:56.227 "data_offset": 2048, 00:11:56.227 "data_size": 63488 00:11:56.227 }, 00:11:56.227 { 00:11:56.227 "name": "BaseBdev4", 00:11:56.227 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:56.227 "is_configured": true, 00:11:56.227 "data_offset": 2048, 00:11:56.227 "data_size": 63488 00:11:56.227 } 00:11:56.227 ] 00:11:56.227 }' 00:11:56.227 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.227 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.486 [2024-11-04 11:44:21.897947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.486 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.487 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.487 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.487 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.487 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.487 "name": "Existed_Raid", 00:11:56.487 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:56.487 "strip_size_kb": 64, 00:11:56.487 "state": "configuring", 00:11:56.487 "raid_level": "concat", 00:11:56.487 "superblock": true, 00:11:56.487 "num_base_bdevs": 4, 00:11:56.487 "num_base_bdevs_discovered": 2, 00:11:56.487 "num_base_bdevs_operational": 4, 00:11:56.487 "base_bdevs_list": [ 00:11:56.487 { 00:11:56.487 "name": "BaseBdev1", 00:11:56.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.487 "is_configured": false, 00:11:56.487 "data_offset": 0, 00:11:56.487 "data_size": 0 00:11:56.487 }, 00:11:56.487 { 00:11:56.487 "name": null, 00:11:56.487 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:56.487 "is_configured": false, 00:11:56.487 "data_offset": 0, 00:11:56.487 "data_size": 63488 
00:11:56.487 }, 00:11:56.487 { 00:11:56.487 "name": "BaseBdev3", 00:11:56.487 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:56.487 "is_configured": true, 00:11:56.487 "data_offset": 2048, 00:11:56.487 "data_size": 63488 00:11:56.487 }, 00:11:56.487 { 00:11:56.487 "name": "BaseBdev4", 00:11:56.487 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:56.487 "is_configured": true, 00:11:56.487 "data_offset": 2048, 00:11:56.487 "data_size": 63488 00:11:56.487 } 00:11:56.487 ] 00:11:56.487 }' 00:11:56.487 11:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.487 11:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.102 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:57.102 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.103 [2024-11-04 11:44:22.378079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.103 BaseBdev1 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.103 [ 00:11:57.103 { 00:11:57.103 "name": "BaseBdev1", 00:11:57.103 "aliases": [ 00:11:57.103 "d7675f2f-9237-4579-b792-7207d68e4ac9" 00:11:57.103 ], 00:11:57.103 "product_name": "Malloc disk", 00:11:57.103 "block_size": 512, 00:11:57.103 "num_blocks": 65536, 00:11:57.103 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:57.103 "assigned_rate_limits": { 00:11:57.103 "rw_ios_per_sec": 0, 00:11:57.103 "rw_mbytes_per_sec": 0, 
00:11:57.103 "r_mbytes_per_sec": 0, 00:11:57.103 "w_mbytes_per_sec": 0 00:11:57.103 }, 00:11:57.103 "claimed": true, 00:11:57.103 "claim_type": "exclusive_write", 00:11:57.103 "zoned": false, 00:11:57.103 "supported_io_types": { 00:11:57.103 "read": true, 00:11:57.103 "write": true, 00:11:57.103 "unmap": true, 00:11:57.103 "flush": true, 00:11:57.103 "reset": true, 00:11:57.103 "nvme_admin": false, 00:11:57.103 "nvme_io": false, 00:11:57.103 "nvme_io_md": false, 00:11:57.103 "write_zeroes": true, 00:11:57.103 "zcopy": true, 00:11:57.103 "get_zone_info": false, 00:11:57.103 "zone_management": false, 00:11:57.103 "zone_append": false, 00:11:57.103 "compare": false, 00:11:57.103 "compare_and_write": false, 00:11:57.103 "abort": true, 00:11:57.103 "seek_hole": false, 00:11:57.103 "seek_data": false, 00:11:57.103 "copy": true, 00:11:57.103 "nvme_iov_md": false 00:11:57.103 }, 00:11:57.103 "memory_domains": [ 00:11:57.103 { 00:11:57.103 "dma_device_id": "system", 00:11:57.103 "dma_device_type": 1 00:11:57.103 }, 00:11:57.103 { 00:11:57.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.103 "dma_device_type": 2 00:11:57.103 } 00:11:57.103 ], 00:11:57.103 "driver_specific": {} 00:11:57.103 } 00:11:57.103 ] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.103 11:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.103 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.103 "name": "Existed_Raid", 00:11:57.103 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:57.103 "strip_size_kb": 64, 00:11:57.103 "state": "configuring", 00:11:57.103 "raid_level": "concat", 00:11:57.103 "superblock": true, 00:11:57.103 "num_base_bdevs": 4, 00:11:57.103 "num_base_bdevs_discovered": 3, 00:11:57.103 "num_base_bdevs_operational": 4, 00:11:57.103 "base_bdevs_list": [ 00:11:57.103 { 00:11:57.103 "name": "BaseBdev1", 00:11:57.103 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:57.103 "is_configured": true, 00:11:57.103 "data_offset": 2048, 00:11:57.103 "data_size": 63488 00:11:57.103 }, 00:11:57.103 { 
00:11:57.103 "name": null, 00:11:57.103 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:57.104 "is_configured": false, 00:11:57.104 "data_offset": 0, 00:11:57.104 "data_size": 63488 00:11:57.104 }, 00:11:57.104 { 00:11:57.104 "name": "BaseBdev3", 00:11:57.104 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:57.104 "is_configured": true, 00:11:57.104 "data_offset": 2048, 00:11:57.104 "data_size": 63488 00:11:57.104 }, 00:11:57.104 { 00:11:57.104 "name": "BaseBdev4", 00:11:57.104 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:57.104 "is_configured": true, 00:11:57.104 "data_offset": 2048, 00:11:57.104 "data_size": 63488 00:11:57.104 } 00:11:57.104 ] 00:11:57.104 }' 00:11:57.104 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.104 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.382 [2024-11-04 11:44:22.893378] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.382 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.642 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.642 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.642 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.642 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.642 11:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.642 "name": "Existed_Raid", 00:11:57.642 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:57.642 "strip_size_kb": 64, 00:11:57.642 "state": "configuring", 00:11:57.642 "raid_level": "concat", 00:11:57.642 "superblock": true, 00:11:57.642 "num_base_bdevs": 4, 00:11:57.642 "num_base_bdevs_discovered": 2, 00:11:57.642 "num_base_bdevs_operational": 4, 00:11:57.642 "base_bdevs_list": [ 00:11:57.642 { 00:11:57.642 "name": "BaseBdev1", 00:11:57.642 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:57.642 "is_configured": true, 00:11:57.642 "data_offset": 2048, 00:11:57.642 "data_size": 63488 00:11:57.642 }, 00:11:57.642 { 00:11:57.642 "name": null, 00:11:57.642 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:57.642 "is_configured": false, 00:11:57.642 "data_offset": 0, 00:11:57.642 "data_size": 63488 00:11:57.642 }, 00:11:57.642 { 00:11:57.642 "name": null, 00:11:57.642 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:57.642 "is_configured": false, 00:11:57.642 "data_offset": 0, 00:11:57.642 "data_size": 63488 00:11:57.642 }, 00:11:57.642 { 00:11:57.642 "name": "BaseBdev4", 00:11:57.642 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:57.642 "is_configured": true, 00:11:57.642 "data_offset": 2048, 00:11:57.642 "data_size": 63488 00:11:57.642 } 00:11:57.642 ] 00:11:57.642 }' 00:11:57.642 11:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.642 11:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.901 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.901 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.901 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.901 
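Editor's sketch: the remove/re-add cycle the trace is in the middle of. `num_base_bdevs_discovered` drops from 3 to 2 when BaseBdev3 is pulled and returns to 3 once `bdev_raid_add_base_bdev` re-claims it; the array stays "configuring" throughout because BaseBdev2 is still absent. `rpc_cmd` is a hypothetical stub for `scripts/rpc.py`.

```shell
rpc_cmd() { echo "rpc: $*"; }   # hypothetical stub for scripts/rpc.py

discovered=3
rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
discovered=$((discovered - 1))   # jq now reports "num_base_bdevs_discovered": 2
rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
discovered=$((discovered + 1))   # back to 3 after the re-add is claimed
echo "num_base_bdevs_discovered=$discovered"
```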
11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.901 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.160 [2024-11-04 11:44:23.444487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.160 "name": "Existed_Raid", 00:11:58.160 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:58.160 "strip_size_kb": 64, 00:11:58.160 "state": "configuring", 00:11:58.160 "raid_level": "concat", 00:11:58.160 "superblock": true, 00:11:58.160 "num_base_bdevs": 4, 00:11:58.160 "num_base_bdevs_discovered": 3, 00:11:58.160 "num_base_bdevs_operational": 4, 00:11:58.160 "base_bdevs_list": [ 00:11:58.160 { 00:11:58.160 "name": "BaseBdev1", 00:11:58.160 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:58.160 "is_configured": true, 00:11:58.160 "data_offset": 2048, 00:11:58.160 "data_size": 63488 00:11:58.160 }, 00:11:58.160 { 00:11:58.160 "name": null, 00:11:58.160 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:58.160 "is_configured": false, 00:11:58.160 "data_offset": 0, 00:11:58.160 "data_size": 63488 00:11:58.160 }, 00:11:58.160 { 00:11:58.160 "name": "BaseBdev3", 00:11:58.160 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:58.160 "is_configured": true, 00:11:58.160 "data_offset": 2048, 00:11:58.160 "data_size": 63488 00:11:58.160 }, 00:11:58.160 { 00:11:58.160 "name": "BaseBdev4", 00:11:58.160 "uuid": 
"da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:58.160 "is_configured": true, 00:11:58.160 "data_offset": 2048, 00:11:58.160 "data_size": 63488 00:11:58.160 } 00:11:58.160 ] 00:11:58.160 }' 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.160 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.419 11:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.419 [2024-11-04 11:44:23.911708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.677 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.678 "name": "Existed_Raid", 00:11:58.678 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:58.678 "strip_size_kb": 64, 00:11:58.678 "state": "configuring", 00:11:58.678 "raid_level": "concat", 00:11:58.678 "superblock": true, 00:11:58.678 "num_base_bdevs": 4, 00:11:58.678 "num_base_bdevs_discovered": 2, 00:11:58.678 "num_base_bdevs_operational": 4, 00:11:58.678 "base_bdevs_list": [ 00:11:58.678 { 00:11:58.678 "name": null, 00:11:58.678 
"uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:58.678 "is_configured": false, 00:11:58.678 "data_offset": 0, 00:11:58.678 "data_size": 63488 00:11:58.678 }, 00:11:58.678 { 00:11:58.678 "name": null, 00:11:58.678 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:58.678 "is_configured": false, 00:11:58.678 "data_offset": 0, 00:11:58.678 "data_size": 63488 00:11:58.678 }, 00:11:58.678 { 00:11:58.678 "name": "BaseBdev3", 00:11:58.678 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:58.678 "is_configured": true, 00:11:58.678 "data_offset": 2048, 00:11:58.678 "data_size": 63488 00:11:58.678 }, 00:11:58.678 { 00:11:58.678 "name": "BaseBdev4", 00:11:58.678 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:58.678 "is_configured": true, 00:11:58.678 "data_offset": 2048, 00:11:58.678 "data_size": 63488 00:11:58.678 } 00:11:58.678 ] 00:11:58.678 }' 00:11:58.678 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.678 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.937 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.937 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.937 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.937 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.195 [2024-11-04 11:44:24.499755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.195 11:44:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.195 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.195 "name": "Existed_Raid", 00:11:59.195 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:59.195 "strip_size_kb": 64, 00:11:59.195 "state": "configuring", 00:11:59.195 "raid_level": "concat", 00:11:59.196 "superblock": true, 00:11:59.196 "num_base_bdevs": 4, 00:11:59.196 "num_base_bdevs_discovered": 3, 00:11:59.196 "num_base_bdevs_operational": 4, 00:11:59.196 "base_bdevs_list": [ 00:11:59.196 { 00:11:59.196 "name": null, 00:11:59.196 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:59.196 "is_configured": false, 00:11:59.196 "data_offset": 0, 00:11:59.196 "data_size": 63488 00:11:59.196 }, 00:11:59.196 { 00:11:59.196 "name": "BaseBdev2", 00:11:59.196 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:59.196 "is_configured": true, 00:11:59.196 "data_offset": 2048, 00:11:59.196 "data_size": 63488 00:11:59.196 }, 00:11:59.196 { 00:11:59.196 "name": "BaseBdev3", 00:11:59.196 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:59.196 "is_configured": true, 00:11:59.196 "data_offset": 2048, 00:11:59.196 "data_size": 63488 00:11:59.196 }, 00:11:59.196 { 00:11:59.196 "name": "BaseBdev4", 00:11:59.196 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:59.196 "is_configured": true, 00:11:59.196 "data_offset": 2048, 00:11:59.196 "data_size": 63488 00:11:59.196 } 00:11:59.196 ] 00:11:59.196 }' 00:11:59.196 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.196 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.455 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.455 11:44:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.455 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.455 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.455 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.714 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:59.714 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.714 11:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.714 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.714 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 11:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d7675f2f-9237-4579-b792-7207d68e4ac9 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 [2024-11-04 11:44:25.059642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:59.714 [2024-11-04 11:44:25.060159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:59.714 [2024-11-04 11:44:25.060235] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:59.714 [2024-11-04 11:44:25.060669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:59.714 NewBaseBdev 00:11:59.714 [2024-11-04 11:44:25.060942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:59.714 [2024-11-04 11:44:25.060965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:59.714 [2024-11-04 11:44:25.061179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.714 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.714 11:44:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 [ 00:11:59.714 { 00:11:59.714 "name": "NewBaseBdev", 00:11:59.714 "aliases": [ 00:11:59.714 "d7675f2f-9237-4579-b792-7207d68e4ac9" 00:11:59.714 ], 00:11:59.714 "product_name": "Malloc disk", 00:11:59.714 "block_size": 512, 00:11:59.714 "num_blocks": 65536, 00:11:59.714 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:59.714 "assigned_rate_limits": { 00:11:59.714 "rw_ios_per_sec": 0, 00:11:59.714 "rw_mbytes_per_sec": 0, 00:11:59.714 "r_mbytes_per_sec": 0, 00:11:59.714 "w_mbytes_per_sec": 0 00:11:59.714 }, 00:11:59.714 "claimed": true, 00:11:59.714 "claim_type": "exclusive_write", 00:11:59.714 "zoned": false, 00:11:59.714 "supported_io_types": { 00:11:59.714 "read": true, 00:11:59.714 "write": true, 00:11:59.714 "unmap": true, 00:11:59.715 "flush": true, 00:11:59.715 "reset": true, 00:11:59.715 "nvme_admin": false, 00:11:59.715 "nvme_io": false, 00:11:59.715 "nvme_io_md": false, 00:11:59.715 "write_zeroes": true, 00:11:59.715 "zcopy": true, 00:11:59.715 "get_zone_info": false, 00:11:59.715 "zone_management": false, 00:11:59.715 "zone_append": false, 00:11:59.715 "compare": false, 00:11:59.715 "compare_and_write": false, 00:11:59.715 "abort": true, 00:11:59.715 "seek_hole": false, 00:11:59.715 "seek_data": false, 00:11:59.715 "copy": true, 00:11:59.715 "nvme_iov_md": false 00:11:59.715 }, 00:11:59.715 "memory_domains": [ 00:11:59.715 { 00:11:59.715 "dma_device_id": "system", 00:11:59.715 "dma_device_type": 1 00:11:59.715 }, 00:11:59.715 { 00:11:59.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.715 "dma_device_type": 2 00:11:59.715 } 00:11:59.715 ], 00:11:59.715 "driver_specific": {} 00:11:59.715 } 00:11:59.715 ] 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:59.715 11:44:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.715 "name": "Existed_Raid", 00:11:59.715 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:11:59.715 "strip_size_kb": 64, 00:11:59.715 
"state": "online", 00:11:59.715 "raid_level": "concat", 00:11:59.715 "superblock": true, 00:11:59.715 "num_base_bdevs": 4, 00:11:59.715 "num_base_bdevs_discovered": 4, 00:11:59.715 "num_base_bdevs_operational": 4, 00:11:59.715 "base_bdevs_list": [ 00:11:59.715 { 00:11:59.715 "name": "NewBaseBdev", 00:11:59.715 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:11:59.715 "is_configured": true, 00:11:59.715 "data_offset": 2048, 00:11:59.715 "data_size": 63488 00:11:59.715 }, 00:11:59.715 { 00:11:59.715 "name": "BaseBdev2", 00:11:59.715 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:11:59.715 "is_configured": true, 00:11:59.715 "data_offset": 2048, 00:11:59.715 "data_size": 63488 00:11:59.715 }, 00:11:59.715 { 00:11:59.715 "name": "BaseBdev3", 00:11:59.715 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:11:59.715 "is_configured": true, 00:11:59.715 "data_offset": 2048, 00:11:59.715 "data_size": 63488 00:11:59.715 }, 00:11:59.715 { 00:11:59.715 "name": "BaseBdev4", 00:11:59.715 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:11:59.715 "is_configured": true, 00:11:59.715 "data_offset": 2048, 00:11:59.715 "data_size": 63488 00:11:59.715 } 00:11:59.715 ] 00:11:59.715 }' 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.715 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.975 
11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.975 [2024-11-04 11:44:25.479377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.975 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.235 "name": "Existed_Raid", 00:12:00.235 "aliases": [ 00:12:00.235 "e0db805a-359c-45cc-8829-c71be7c79d6b" 00:12:00.235 ], 00:12:00.235 "product_name": "Raid Volume", 00:12:00.235 "block_size": 512, 00:12:00.235 "num_blocks": 253952, 00:12:00.235 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:12:00.235 "assigned_rate_limits": { 00:12:00.235 "rw_ios_per_sec": 0, 00:12:00.235 "rw_mbytes_per_sec": 0, 00:12:00.235 "r_mbytes_per_sec": 0, 00:12:00.235 "w_mbytes_per_sec": 0 00:12:00.235 }, 00:12:00.235 "claimed": false, 00:12:00.235 "zoned": false, 00:12:00.235 "supported_io_types": { 00:12:00.235 "read": true, 00:12:00.235 "write": true, 00:12:00.235 "unmap": true, 00:12:00.235 "flush": true, 00:12:00.235 "reset": true, 00:12:00.235 "nvme_admin": false, 00:12:00.235 "nvme_io": false, 00:12:00.235 "nvme_io_md": false, 00:12:00.235 "write_zeroes": true, 00:12:00.235 "zcopy": false, 00:12:00.235 "get_zone_info": false, 00:12:00.235 "zone_management": false, 00:12:00.235 "zone_append": false, 00:12:00.235 "compare": false, 00:12:00.235 "compare_and_write": false, 00:12:00.235 "abort": 
false, 00:12:00.235 "seek_hole": false, 00:12:00.235 "seek_data": false, 00:12:00.235 "copy": false, 00:12:00.235 "nvme_iov_md": false 00:12:00.235 }, 00:12:00.235 "memory_domains": [ 00:12:00.235 { 00:12:00.235 "dma_device_id": "system", 00:12:00.235 "dma_device_type": 1 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.235 "dma_device_type": 2 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "dma_device_id": "system", 00:12:00.235 "dma_device_type": 1 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.235 "dma_device_type": 2 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "dma_device_id": "system", 00:12:00.235 "dma_device_type": 1 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.235 "dma_device_type": 2 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "dma_device_id": "system", 00:12:00.235 "dma_device_type": 1 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.235 "dma_device_type": 2 00:12:00.235 } 00:12:00.235 ], 00:12:00.235 "driver_specific": { 00:12:00.235 "raid": { 00:12:00.235 "uuid": "e0db805a-359c-45cc-8829-c71be7c79d6b", 00:12:00.235 "strip_size_kb": 64, 00:12:00.235 "state": "online", 00:12:00.235 "raid_level": "concat", 00:12:00.235 "superblock": true, 00:12:00.235 "num_base_bdevs": 4, 00:12:00.235 "num_base_bdevs_discovered": 4, 00:12:00.235 "num_base_bdevs_operational": 4, 00:12:00.235 "base_bdevs_list": [ 00:12:00.235 { 00:12:00.235 "name": "NewBaseBdev", 00:12:00.235 "uuid": "d7675f2f-9237-4579-b792-7207d68e4ac9", 00:12:00.235 "is_configured": true, 00:12:00.235 "data_offset": 2048, 00:12:00.235 "data_size": 63488 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "name": "BaseBdev2", 00:12:00.235 "uuid": "3653a295-2594-4206-9ec2-4967203fa3be", 00:12:00.235 "is_configured": true, 00:12:00.235 "data_offset": 2048, 00:12:00.235 "data_size": 63488 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 
"name": "BaseBdev3", 00:12:00.235 "uuid": "d791d2a9-242f-4ea0-bac0-bd5a36a7aa07", 00:12:00.235 "is_configured": true, 00:12:00.235 "data_offset": 2048, 00:12:00.235 "data_size": 63488 00:12:00.235 }, 00:12:00.235 { 00:12:00.235 "name": "BaseBdev4", 00:12:00.235 "uuid": "da70a5ac-1f41-4f8e-a7fb-da57780bef44", 00:12:00.235 "is_configured": true, 00:12:00.235 "data_offset": 2048, 00:12:00.235 "data_size": 63488 00:12:00.235 } 00:12:00.235 ] 00:12:00.235 } 00:12:00.235 } 00:12:00.235 }' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:00.235 BaseBdev2 00:12:00.235 BaseBdev3 00:12:00.235 BaseBdev4' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.235 11:44:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.235 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.236 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.495 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.495 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:00.495 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:00.495 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:00.495 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.495 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:00.495 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.496 [2024-11-04 11:44:25.846413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.496 [2024-11-04 11:44:25.846455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.496 [2024-11-04 11:44:25.846560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.496 [2024-11-04 11:44:25.846644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.496 [2024-11-04 11:44:25.846657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72186 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72186 ']' 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72186 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72186 00:12:00.496 killing process with pid 72186 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72186' 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72186 00:12:00.496 [2024-11-04 11:44:25.897339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.496 11:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72186 00:12:01.065 [2024-11-04 11:44:26.320218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.003 11:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:02.004 00:12:02.004 real 0m11.556s 00:12:02.004 user 0m18.182s 00:12:02.004 sys 0m2.033s 00:12:02.004 ************************************ 00:12:02.004 END TEST raid_state_function_test_sb 00:12:02.004 
************************************ 00:12:02.004 11:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:02.004 11:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.263 11:44:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:02.263 11:44:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:02.263 11:44:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:02.263 11:44:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.263 ************************************ 00:12:02.263 START TEST raid_superblock_test 00:12:02.263 ************************************ 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:02.263 11:44:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72862 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72862 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72862 ']' 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:02.263 11:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.263 [2024-11-04 11:44:27.664156] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:02.263 [2024-11-04 11:44:27.664293] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72862 ] 00:12:02.522 [2024-11-04 11:44:27.841904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.522 [2024-11-04 11:44:27.961467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.782 [2024-11-04 11:44:28.183395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.782 [2024-11-04 11:44:28.183450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:03.042 
11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.042 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 malloc1 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 [2024-11-04 11:44:28.571553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:03.303 [2024-11-04 11:44:28.571640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.303 [2024-11-04 11:44:28.571675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.303 [2024-11-04 11:44:28.571689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.303 [2024-11-04 11:44:28.574213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.303 [2024-11-04 11:44:28.574263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:03.303 pt1 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 malloc2 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 [2024-11-04 11:44:28.631096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.303 [2024-11-04 11:44:28.631249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.303 [2024-11-04 11:44:28.631328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.303 [2024-11-04 11:44:28.631380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.303 [2024-11-04 11:44:28.633878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.303 [2024-11-04 11:44:28.633976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.303 
pt2 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 malloc3 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 [2024-11-04 11:44:28.706449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:03.303 [2024-11-04 11:44:28.706600] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.303 [2024-11-04 11:44:28.706669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:03.303 [2024-11-04 11:44:28.706731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.303 [2024-11-04 11:44:28.709327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.303 [2024-11-04 11:44:28.709445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:03.303 pt3 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 malloc4 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 [2024-11-04 11:44:28.765546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:03.303 [2024-11-04 11:44:28.765716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.303 [2024-11-04 11:44:28.765770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:03.303 [2024-11-04 11:44:28.765785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.303 [2024-11-04 11:44:28.768300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.303 [2024-11-04 11:44:28.768393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:03.303 pt4 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.303 [2024-11-04 11:44:28.777534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:03.303 [2024-11-04 
11:44:28.779569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.303 [2024-11-04 11:44:28.779646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:03.303 [2024-11-04 11:44:28.779725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:03.303 [2024-11-04 11:44:28.779950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:03.303 [2024-11-04 11:44:28.779964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:03.303 [2024-11-04 11:44:28.780310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.303 [2024-11-04 11:44:28.780538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:03.303 [2024-11-04 11:44:28.780556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:03.303 [2024-11-04 11:44:28.780745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:03.303 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.304 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.304 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.304 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.304 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.304 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.304 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.304 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.563 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.563 "name": "raid_bdev1", 00:12:03.563 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:03.563 "strip_size_kb": 64, 00:12:03.563 "state": "online", 00:12:03.563 "raid_level": "concat", 00:12:03.563 "superblock": true, 00:12:03.563 "num_base_bdevs": 4, 00:12:03.563 "num_base_bdevs_discovered": 4, 00:12:03.563 "num_base_bdevs_operational": 4, 00:12:03.563 "base_bdevs_list": [ 00:12:03.563 { 00:12:03.563 "name": "pt1", 00:12:03.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.563 "is_configured": true, 00:12:03.563 "data_offset": 2048, 00:12:03.563 "data_size": 63488 00:12:03.563 }, 00:12:03.563 { 00:12:03.563 "name": "pt2", 00:12:03.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.563 "is_configured": true, 00:12:03.563 "data_offset": 2048, 00:12:03.563 "data_size": 63488 00:12:03.563 }, 00:12:03.563 { 00:12:03.563 "name": "pt3", 00:12:03.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.563 "is_configured": true, 00:12:03.563 "data_offset": 2048, 00:12:03.563 
"data_size": 63488 00:12:03.563 }, 00:12:03.563 { 00:12:03.563 "name": "pt4", 00:12:03.563 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.563 "is_configured": true, 00:12:03.563 "data_offset": 2048, 00:12:03.563 "data_size": 63488 00:12:03.563 } 00:12:03.563 ] 00:12:03.563 }' 00:12:03.563 11:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.563 11:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.823 [2024-11-04 11:44:29.257090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.823 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.823 "name": "raid_bdev1", 00:12:03.823 "aliases": [ 00:12:03.823 "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12" 
00:12:03.823 ], 00:12:03.823 "product_name": "Raid Volume", 00:12:03.823 "block_size": 512, 00:12:03.823 "num_blocks": 253952, 00:12:03.823 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:03.823 "assigned_rate_limits": { 00:12:03.823 "rw_ios_per_sec": 0, 00:12:03.823 "rw_mbytes_per_sec": 0, 00:12:03.823 "r_mbytes_per_sec": 0, 00:12:03.823 "w_mbytes_per_sec": 0 00:12:03.823 }, 00:12:03.823 "claimed": false, 00:12:03.823 "zoned": false, 00:12:03.823 "supported_io_types": { 00:12:03.823 "read": true, 00:12:03.823 "write": true, 00:12:03.823 "unmap": true, 00:12:03.823 "flush": true, 00:12:03.823 "reset": true, 00:12:03.823 "nvme_admin": false, 00:12:03.823 "nvme_io": false, 00:12:03.823 "nvme_io_md": false, 00:12:03.823 "write_zeroes": true, 00:12:03.823 "zcopy": false, 00:12:03.823 "get_zone_info": false, 00:12:03.823 "zone_management": false, 00:12:03.823 "zone_append": false, 00:12:03.823 "compare": false, 00:12:03.823 "compare_and_write": false, 00:12:03.823 "abort": false, 00:12:03.823 "seek_hole": false, 00:12:03.823 "seek_data": false, 00:12:03.823 "copy": false, 00:12:03.823 "nvme_iov_md": false 00:12:03.823 }, 00:12:03.823 "memory_domains": [ 00:12:03.823 { 00:12:03.823 "dma_device_id": "system", 00:12:03.823 "dma_device_type": 1 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.823 "dma_device_type": 2 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "dma_device_id": "system", 00:12:03.823 "dma_device_type": 1 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.823 "dma_device_type": 2 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "dma_device_id": "system", 00:12:03.823 "dma_device_type": 1 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.823 "dma_device_type": 2 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "dma_device_id": "system", 00:12:03.823 "dma_device_type": 1 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:03.823 "dma_device_type": 2 00:12:03.823 } 00:12:03.823 ], 00:12:03.823 "driver_specific": { 00:12:03.823 "raid": { 00:12:03.823 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:03.823 "strip_size_kb": 64, 00:12:03.823 "state": "online", 00:12:03.823 "raid_level": "concat", 00:12:03.823 "superblock": true, 00:12:03.823 "num_base_bdevs": 4, 00:12:03.823 "num_base_bdevs_discovered": 4, 00:12:03.823 "num_base_bdevs_operational": 4, 00:12:03.823 "base_bdevs_list": [ 00:12:03.823 { 00:12:03.823 "name": "pt1", 00:12:03.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.823 "is_configured": true, 00:12:03.823 "data_offset": 2048, 00:12:03.823 "data_size": 63488 00:12:03.823 }, 00:12:03.823 { 00:12:03.823 "name": "pt2", 00:12:03.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.823 "is_configured": true, 00:12:03.824 "data_offset": 2048, 00:12:03.824 "data_size": 63488 00:12:03.824 }, 00:12:03.824 { 00:12:03.824 "name": "pt3", 00:12:03.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.824 "is_configured": true, 00:12:03.824 "data_offset": 2048, 00:12:03.824 "data_size": 63488 00:12:03.824 }, 00:12:03.824 { 00:12:03.824 "name": "pt4", 00:12:03.824 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.824 "is_configured": true, 00:12:03.824 "data_offset": 2048, 00:12:03.824 "data_size": 63488 00:12:03.824 } 00:12:03.824 ] 00:12:03.824 } 00:12:03.824 } 00:12:03.824 }' 00:12:03.824 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.824 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:03.824 pt2 00:12:03.824 pt3 00:12:03.824 pt4' 00:12:03.824 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.082 11:44:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.082 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:04.340 [2024-11-04 11:44:29.616534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fe02b907-8d3f-4cfe-aefd-3d01d07a7d12 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fe02b907-8d3f-4cfe-aefd-3d01d07a7d12 ']' 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.340 [2024-11-04 11:44:29.664070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.340 [2024-11-04 11:44:29.664102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.340 [2024-11-04 11:44:29.664244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.340 [2024-11-04 11:44:29.664329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.340 [2024-11-04 11:44:29.664349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.340 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 [2024-11-04 11:44:29.831799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:04.341 [2024-11-04 11:44:29.833893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:04.341 [2024-11-04 11:44:29.834005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:04.341 [2024-11-04 11:44:29.834050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:04.341 [2024-11-04 11:44:29.834125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:04.341 [2024-11-04 11:44:29.834206] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:04.341 [2024-11-04 11:44:29.834232] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:04.341 [2024-11-04 11:44:29.834258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:04.341 [2024-11-04 11:44:29.834276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.341 [2024-11-04 11:44:29.834291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:04.341 request: 00:12:04.341 { 00:12:04.341 "name": "raid_bdev1", 00:12:04.341 "raid_level": "concat", 00:12:04.341 "base_bdevs": [ 00:12:04.341 "malloc1", 00:12:04.341 "malloc2", 00:12:04.341 "malloc3", 00:12:04.341 "malloc4" 00:12:04.341 ], 00:12:04.341 "strip_size_kb": 64, 00:12:04.341 "superblock": false, 00:12:04.341 "method": "bdev_raid_create", 00:12:04.341 "req_id": 1 00:12:04.341 } 00:12:04.341 Got JSON-RPC error response 00:12:04.341 response: 00:12:04.341 { 00:12:04.341 "code": -17, 00:12:04.341 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:04.341 } 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.341 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.599 [2024-11-04 11:44:29.891663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.599 [2024-11-04 11:44:29.891801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.599 [2024-11-04 11:44:29.891859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:04.599 [2024-11-04 11:44:29.891912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.599 [2024-11-04 11:44:29.894392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.599 [2024-11-04 11:44:29.894502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.599 [2024-11-04 11:44:29.894659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:04.599 [2024-11-04 11:44:29.894785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.599 pt1 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.599 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.599 "name": "raid_bdev1", 00:12:04.599 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:04.599 "strip_size_kb": 64, 00:12:04.599 "state": "configuring", 00:12:04.600 "raid_level": "concat", 00:12:04.600 "superblock": true, 00:12:04.600 "num_base_bdevs": 4, 00:12:04.600 "num_base_bdevs_discovered": 1, 00:12:04.600 "num_base_bdevs_operational": 4, 00:12:04.600 "base_bdevs_list": [ 00:12:04.600 { 00:12:04.600 "name": "pt1", 00:12:04.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.600 "is_configured": true, 00:12:04.600 "data_offset": 2048, 00:12:04.600 "data_size": 63488 00:12:04.600 }, 00:12:04.600 { 00:12:04.600 "name": null, 00:12:04.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.600 "is_configured": false, 00:12:04.600 "data_offset": 2048, 00:12:04.600 "data_size": 63488 00:12:04.600 }, 00:12:04.600 { 00:12:04.600 "name": null, 00:12:04.600 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.600 "is_configured": false, 00:12:04.600 "data_offset": 2048, 00:12:04.600 "data_size": 63488 00:12:04.600 }, 00:12:04.600 { 00:12:04.600 "name": null, 00:12:04.600 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.600 "is_configured": false, 00:12:04.600 "data_offset": 2048, 00:12:04.600 "data_size": 63488 00:12:04.600 } 00:12:04.600 ] 00:12:04.600 }' 00:12:04.600 11:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.600 11:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.858 [2024-11-04 11:44:30.366878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.858 [2024-11-04 11:44:30.367035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.858 [2024-11-04 11:44:30.367102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:04.858 [2024-11-04 11:44:30.367154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.858 [2024-11-04 11:44:30.367758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.858 [2024-11-04 11:44:30.367857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.858 [2024-11-04 11:44:30.368009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:04.858 [2024-11-04 11:44:30.368090] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.858 pt2 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.858 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.858 [2024-11-04 11:44:30.374870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.117 11:44:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.117 "name": "raid_bdev1", 00:12:05.117 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:05.117 "strip_size_kb": 64, 00:12:05.117 "state": "configuring", 00:12:05.117 "raid_level": "concat", 00:12:05.117 "superblock": true, 00:12:05.117 "num_base_bdevs": 4, 00:12:05.117 "num_base_bdevs_discovered": 1, 00:12:05.117 "num_base_bdevs_operational": 4, 00:12:05.117 "base_bdevs_list": [ 00:12:05.117 { 00:12:05.117 "name": "pt1", 00:12:05.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.117 "is_configured": true, 00:12:05.117 "data_offset": 2048, 00:12:05.117 "data_size": 63488 00:12:05.117 }, 00:12:05.117 { 00:12:05.117 "name": null, 00:12:05.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.117 "is_configured": false, 00:12:05.117 "data_offset": 0, 00:12:05.117 "data_size": 63488 00:12:05.117 }, 00:12:05.117 { 00:12:05.117 "name": null, 00:12:05.117 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.117 "is_configured": false, 00:12:05.117 "data_offset": 2048, 00:12:05.117 "data_size": 63488 00:12:05.117 }, 00:12:05.117 { 00:12:05.117 "name": null, 00:12:05.117 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.117 "is_configured": false, 00:12:05.117 "data_offset": 2048, 00:12:05.117 "data_size": 63488 00:12:05.117 } 00:12:05.117 ] 00:12:05.117 }' 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.117 11:44:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.376 [2024-11-04 11:44:30.878028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.376 [2024-11-04 11:44:30.878180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.376 [2024-11-04 11:44:30.878279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:05.376 [2024-11-04 11:44:30.878323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.376 [2024-11-04 11:44:30.878936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.376 [2024-11-04 11:44:30.879026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.376 [2024-11-04 11:44:30.879186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:05.376 [2024-11-04 11:44:30.879264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.376 pt2 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.376 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.376 [2024-11-04 11:44:30.889969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.376 [2024-11-04 11:44:30.890095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.376 [2024-11-04 11:44:30.890142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:05.376 [2024-11-04 11:44:30.890156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.376 [2024-11-04 11:44:30.890678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.376 [2024-11-04 11:44:30.890702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.376 [2024-11-04 11:44:30.890797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:05.376 [2024-11-04 11:44:30.890821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.635 pt3 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.635 [2024-11-04 11:44:30.901911] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:05.635 [2024-11-04 11:44:30.901970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.635 [2024-11-04 11:44:30.901994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:05.635 [2024-11-04 11:44:30.902005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.635 [2024-11-04 11:44:30.902458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.635 [2024-11-04 11:44:30.902481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:05.635 [2024-11-04 11:44:30.902562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:05.635 [2024-11-04 11:44:30.902586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:05.635 [2024-11-04 11:44:30.902740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.635 [2024-11-04 11:44:30.902751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:05.635 [2024-11-04 11:44:30.903058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:05.635 [2024-11-04 11:44:30.903254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.635 [2024-11-04 11:44:30.903271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:05.635 [2024-11-04 11:44:30.903420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.635 pt4 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.635 "name": "raid_bdev1", 00:12:05.635 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:05.635 "strip_size_kb": 64, 00:12:05.635 "state": "online", 00:12:05.635 "raid_level": "concat", 00:12:05.635 
"superblock": true, 00:12:05.635 "num_base_bdevs": 4, 00:12:05.635 "num_base_bdevs_discovered": 4, 00:12:05.635 "num_base_bdevs_operational": 4, 00:12:05.635 "base_bdevs_list": [ 00:12:05.635 { 00:12:05.635 "name": "pt1", 00:12:05.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.635 "is_configured": true, 00:12:05.635 "data_offset": 2048, 00:12:05.635 "data_size": 63488 00:12:05.635 }, 00:12:05.635 { 00:12:05.635 "name": "pt2", 00:12:05.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.635 "is_configured": true, 00:12:05.635 "data_offset": 2048, 00:12:05.635 "data_size": 63488 00:12:05.635 }, 00:12:05.635 { 00:12:05.635 "name": "pt3", 00:12:05.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.635 "is_configured": true, 00:12:05.635 "data_offset": 2048, 00:12:05.635 "data_size": 63488 00:12:05.635 }, 00:12:05.635 { 00:12:05.635 "name": "pt4", 00:12:05.635 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.635 "is_configured": true, 00:12:05.635 "data_offset": 2048, 00:12:05.635 "data_size": 63488 00:12:05.635 } 00:12:05.635 ] 00:12:05.635 }' 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.635 11:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.917 11:44:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.917 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.917 [2024-11-04 11:44:31.433522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.175 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.175 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.175 "name": "raid_bdev1", 00:12:06.175 "aliases": [ 00:12:06.175 "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12" 00:12:06.175 ], 00:12:06.175 "product_name": "Raid Volume", 00:12:06.175 "block_size": 512, 00:12:06.175 "num_blocks": 253952, 00:12:06.175 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:06.175 "assigned_rate_limits": { 00:12:06.175 "rw_ios_per_sec": 0, 00:12:06.175 "rw_mbytes_per_sec": 0, 00:12:06.175 "r_mbytes_per_sec": 0, 00:12:06.175 "w_mbytes_per_sec": 0 00:12:06.175 }, 00:12:06.175 "claimed": false, 00:12:06.175 "zoned": false, 00:12:06.175 "supported_io_types": { 00:12:06.175 "read": true, 00:12:06.175 "write": true, 00:12:06.175 "unmap": true, 00:12:06.175 "flush": true, 00:12:06.175 "reset": true, 00:12:06.175 "nvme_admin": false, 00:12:06.175 "nvme_io": false, 00:12:06.175 "nvme_io_md": false, 00:12:06.175 "write_zeroes": true, 00:12:06.175 "zcopy": false, 00:12:06.175 "get_zone_info": false, 00:12:06.175 "zone_management": false, 00:12:06.175 "zone_append": false, 00:12:06.175 "compare": false, 00:12:06.175 "compare_and_write": false, 00:12:06.175 "abort": false, 00:12:06.175 "seek_hole": false, 00:12:06.175 "seek_data": false, 00:12:06.175 "copy": false, 00:12:06.175 "nvme_iov_md": false 00:12:06.175 }, 00:12:06.175 
"memory_domains": [ 00:12:06.175 { 00:12:06.175 "dma_device_id": "system", 00:12:06.175 "dma_device_type": 1 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.175 "dma_device_type": 2 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "system", 00:12:06.175 "dma_device_type": 1 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.175 "dma_device_type": 2 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "system", 00:12:06.175 "dma_device_type": 1 00:12:06.175 }, 00:12:06.175 { 00:12:06.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.176 "dma_device_type": 2 00:12:06.176 }, 00:12:06.176 { 00:12:06.176 "dma_device_id": "system", 00:12:06.176 "dma_device_type": 1 00:12:06.176 }, 00:12:06.176 { 00:12:06.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.176 "dma_device_type": 2 00:12:06.176 } 00:12:06.176 ], 00:12:06.176 "driver_specific": { 00:12:06.176 "raid": { 00:12:06.176 "uuid": "fe02b907-8d3f-4cfe-aefd-3d01d07a7d12", 00:12:06.176 "strip_size_kb": 64, 00:12:06.176 "state": "online", 00:12:06.176 "raid_level": "concat", 00:12:06.176 "superblock": true, 00:12:06.176 "num_base_bdevs": 4, 00:12:06.176 "num_base_bdevs_discovered": 4, 00:12:06.176 "num_base_bdevs_operational": 4, 00:12:06.176 "base_bdevs_list": [ 00:12:06.176 { 00:12:06.176 "name": "pt1", 00:12:06.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.176 "is_configured": true, 00:12:06.176 "data_offset": 2048, 00:12:06.176 "data_size": 63488 00:12:06.176 }, 00:12:06.176 { 00:12:06.176 "name": "pt2", 00:12:06.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.176 "is_configured": true, 00:12:06.176 "data_offset": 2048, 00:12:06.176 "data_size": 63488 00:12:06.176 }, 00:12:06.176 { 00:12:06.176 "name": "pt3", 00:12:06.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.176 "is_configured": true, 00:12:06.176 "data_offset": 2048, 00:12:06.176 "data_size": 63488 
00:12:06.176 }, 00:12:06.176 { 00:12:06.176 "name": "pt4", 00:12:06.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.176 "is_configured": true, 00:12:06.176 "data_offset": 2048, 00:12:06.176 "data_size": 63488 00:12:06.176 } 00:12:06.176 ] 00:12:06.176 } 00:12:06.176 } 00:12:06.176 }' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:06.176 pt2 00:12:06.176 pt3 00:12:06.176 pt4' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.176 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.435 [2024-11-04 11:44:31.772938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fe02b907-8d3f-4cfe-aefd-3d01d07a7d12 '!=' fe02b907-8d3f-4cfe-aefd-3d01d07a7d12 ']' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72862 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72862 ']' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72862 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72862 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72862' 00:12:06.435 killing process with pid 72862 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72862 00:12:06.435 [2024-11-04 11:44:31.843693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.435 [2024-11-04 11:44:31.843805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.435 11:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72862 00:12:06.435 [2024-11-04 11:44:31.843895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.435 [2024-11-04 11:44:31.843908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:07.003 [2024-11-04 11:44:32.241715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.940 11:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:07.940 00:12:07.940 real 0m5.808s 00:12:07.940 user 0m8.365s 00:12:07.940 sys 0m0.990s 00:12:07.940 ************************************ 00:12:07.940 END TEST raid_superblock_test 00:12:07.940 ************************************ 00:12:07.940 11:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:07.940 11:44:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.940 11:44:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:07.940 11:44:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:07.940 11:44:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:07.940 11:44:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.940 ************************************ 00:12:07.940 START TEST raid_read_error_test 00:12:07.940 ************************************ 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:07.940 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k7DyZMNPl8 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73126 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73126 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # '[' -z 73126 ']' 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:08.199 11:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.199 [2024-11-04 11:44:33.549520] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:08.199 [2024-11-04 11:44:33.549706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73126 ] 00:12:08.199 [2024-11-04 11:44:33.706382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.457 [2024-11-04 11:44:33.823686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.716 [2024-11-04 11:44:34.027165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.716 [2024-11-04 11:44:34.027293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 BaseBdev1_malloc 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 true 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.002 [2024-11-04 11:44:34.505759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:09.002 [2024-11-04 11:44:34.505822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.002 [2024-11-04 11:44:34.505846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:09.002 [2024-11-04 11:44:34.505859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.002 [2024-11-04 11:44:34.508351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.002 [2024-11-04 11:44:34.508486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:09.002 BaseBdev1 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.002 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 BaseBdev2_malloc 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 true 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 [2024-11-04 11:44:34.573551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:09.262 [2024-11-04 11:44:34.573618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.262 [2024-11-04 11:44:34.573636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:09.262 [2024-11-04 11:44:34.573648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.262 [2024-11-04 11:44:34.575946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.262 [2024-11-04 11:44:34.576063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:09.262 BaseBdev2 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 BaseBdev3_malloc 00:12:09.262 11:44:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 true 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 [2024-11-04 11:44:34.653713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:09.262 [2024-11-04 11:44:34.653773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.262 [2024-11-04 11:44:34.653794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:09.262 [2024-11-04 11:44:34.653805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.262 [2024-11-04 11:44:34.656099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.262 [2024-11-04 11:44:34.656164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:09.262 BaseBdev3 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 BaseBdev4_malloc 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 true 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 [2024-11-04 11:44:34.720748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:09.262 [2024-11-04 11:44:34.720819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.262 [2024-11-04 11:44:34.720843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:09.262 [2024-11-04 11:44:34.720857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.262 [2024-11-04 11:44:34.723632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.262 [2024-11-04 11:44:34.723735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:09.262 BaseBdev4 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 [2024-11-04 11:44:34.732790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.262 [2024-11-04 11:44:34.734878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.262 [2024-11-04 11:44:34.735042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.262 [2024-11-04 11:44:34.735134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:09.262 [2024-11-04 11:44:34.735457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:09.262 [2024-11-04 11:44:34.735478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:09.262 [2024-11-04 11:44:34.735790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:09.262 [2024-11-04 11:44:34.735992] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:09.262 [2024-11-04 11:44:34.736006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:09.262 [2024-11-04 11:44:34.736216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:09.262 11:44:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.262 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.521 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.521 "name": "raid_bdev1", 00:12:09.521 "uuid": "952fddf5-be17-42fc-af9b-b357036d11ac", 00:12:09.521 "strip_size_kb": 64, 00:12:09.521 "state": "online", 00:12:09.521 "raid_level": "concat", 00:12:09.521 "superblock": true, 00:12:09.522 "num_base_bdevs": 4, 00:12:09.522 "num_base_bdevs_discovered": 4, 00:12:09.522 "num_base_bdevs_operational": 4, 00:12:09.522 "base_bdevs_list": [ 
00:12:09.522 { 00:12:09.522 "name": "BaseBdev1", 00:12:09.522 "uuid": "70f1b204-7348-5bf7-b386-6adaa592af17", 00:12:09.522 "is_configured": true, 00:12:09.522 "data_offset": 2048, 00:12:09.522 "data_size": 63488 00:12:09.522 }, 00:12:09.522 { 00:12:09.522 "name": "BaseBdev2", 00:12:09.522 "uuid": "e2286db6-605e-5fc9-a6ee-8df327b24e2c", 00:12:09.522 "is_configured": true, 00:12:09.522 "data_offset": 2048, 00:12:09.522 "data_size": 63488 00:12:09.522 }, 00:12:09.522 { 00:12:09.522 "name": "BaseBdev3", 00:12:09.522 "uuid": "5dc6647e-21fd-5b9a-ba2d-fe01ea20f89c", 00:12:09.522 "is_configured": true, 00:12:09.522 "data_offset": 2048, 00:12:09.522 "data_size": 63488 00:12:09.522 }, 00:12:09.522 { 00:12:09.522 "name": "BaseBdev4", 00:12:09.522 "uuid": "7d28a244-5d87-51dd-a017-11b3763d04b5", 00:12:09.522 "is_configured": true, 00:12:09.522 "data_offset": 2048, 00:12:09.522 "data_size": 63488 00:12:09.522 } 00:12:09.522 ] 00:12:09.522 }' 00:12:09.522 11:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.522 11:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.781 11:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:09.781 11:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:09.781 [2024-11-04 11:44:35.209474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.718 11:44:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.718 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.718 11:44:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.718 "name": "raid_bdev1", 00:12:10.718 "uuid": "952fddf5-be17-42fc-af9b-b357036d11ac", 00:12:10.718 "strip_size_kb": 64, 00:12:10.718 "state": "online", 00:12:10.718 "raid_level": "concat", 00:12:10.718 "superblock": true, 00:12:10.718 "num_base_bdevs": 4, 00:12:10.718 "num_base_bdevs_discovered": 4, 00:12:10.718 "num_base_bdevs_operational": 4, 00:12:10.719 "base_bdevs_list": [ 00:12:10.719 { 00:12:10.719 "name": "BaseBdev1", 00:12:10.719 "uuid": "70f1b204-7348-5bf7-b386-6adaa592af17", 00:12:10.719 "is_configured": true, 00:12:10.719 "data_offset": 2048, 00:12:10.719 "data_size": 63488 00:12:10.719 }, 00:12:10.719 { 00:12:10.719 "name": "BaseBdev2", 00:12:10.719 "uuid": "e2286db6-605e-5fc9-a6ee-8df327b24e2c", 00:12:10.719 "is_configured": true, 00:12:10.719 "data_offset": 2048, 00:12:10.719 "data_size": 63488 00:12:10.719 }, 00:12:10.719 { 00:12:10.719 "name": "BaseBdev3", 00:12:10.719 "uuid": "5dc6647e-21fd-5b9a-ba2d-fe01ea20f89c", 00:12:10.719 "is_configured": true, 00:12:10.719 "data_offset": 2048, 00:12:10.719 "data_size": 63488 00:12:10.719 }, 00:12:10.719 { 00:12:10.719 "name": "BaseBdev4", 00:12:10.719 "uuid": "7d28a244-5d87-51dd-a017-11b3763d04b5", 00:12:10.719 "is_configured": true, 00:12:10.719 "data_offset": 2048, 00:12:10.719 "data_size": 63488 00:12:10.719 } 00:12:10.719 ] 00:12:10.719 }' 00:12:10.719 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.719 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.123 [2024-11-04 11:44:36.574029] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.123 [2024-11-04 11:44:36.574124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.123 [2024-11-04 11:44:36.577065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.123 [2024-11-04 11:44:36.577122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.123 [2024-11-04 11:44:36.577164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.123 [2024-11-04 11:44:36.577179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:11.123 { 00:12:11.123 "results": [ 00:12:11.123 { 00:12:11.123 "job": "raid_bdev1", 00:12:11.123 "core_mask": "0x1", 00:12:11.123 "workload": "randrw", 00:12:11.123 "percentage": 50, 00:12:11.123 "status": "finished", 00:12:11.123 "queue_depth": 1, 00:12:11.123 "io_size": 131072, 00:12:11.123 "runtime": 1.365389, 00:12:11.123 "iops": 14330.714543620903, 00:12:11.123 "mibps": 1791.3393179526129, 00:12:11.123 "io_failed": 1, 00:12:11.123 "io_timeout": 0, 00:12:11.123 "avg_latency_us": 96.87205704349316, 00:12:11.123 "min_latency_us": 26.717903930131005, 00:12:11.123 "max_latency_us": 1438.071615720524 00:12:11.123 } 00:12:11.123 ], 00:12:11.123 "core_count": 1 00:12:11.123 } 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73126 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73126 ']' 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73126 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73126 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73126' 00:12:11.123 killing process with pid 73126 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73126 00:12:11.123 [2024-11-04 11:44:36.611907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.123 11:44:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73126 00:12:11.699 [2024-11-04 11:44:36.960592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k7DyZMNPl8 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:13.080 00:12:13.080 real 0m4.770s 00:12:13.080 user 0m5.615s 00:12:13.080 sys 0m0.560s 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:12:13.080 11:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.080 ************************************ 00:12:13.080 END TEST raid_read_error_test 00:12:13.080 ************************************ 00:12:13.080 11:44:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:13.080 11:44:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:13.080 11:44:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.080 11:44:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.080 ************************************ 00:12:13.080 START TEST raid_write_error_test 00:12:13.080 ************************************ 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:13.080 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AUAFvGeS6b 00:12:13.081 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73272 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73272 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73272 ']' 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.081 11:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:13.081 [2024-11-04 11:44:38.371475] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:13.081 [2024-11-04 11:44:38.371611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73272 ] 00:12:13.081 [2024-11-04 11:44:38.547538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.340 [2024-11-04 11:44:38.673831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.600 [2024-11-04 11:44:38.898499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.600 [2024-11-04 11:44:38.898638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.859 BaseBdev1_malloc 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.859 true 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.859 [2024-11-04 11:44:39.294508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.859 [2024-11-04 11:44:39.294640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.859 [2024-11-04 11:44:39.294674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.859 [2024-11-04 11:44:39.294706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.859 [2024-11-04 11:44:39.297471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.859 [2024-11-04 11:44:39.297527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.859 BaseBdev1 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.859 BaseBdev2_malloc 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.859 11:44:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.859 true 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.859 [2024-11-04 11:44:39.363168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.859 [2024-11-04 11:44:39.363232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.859 [2024-11-04 11:44:39.363252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.859 [2024-11-04 11:44:39.363263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.859 [2024-11-04 11:44:39.365656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.859 [2024-11-04 11:44:39.365700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.859 BaseBdev2 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.859 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:14.120 BaseBdev3_malloc 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 true 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 [2024-11-04 11:44:39.440590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:14.120 [2024-11-04 11:44:39.440648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.120 [2024-11-04 11:44:39.440669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:14.120 [2024-11-04 11:44:39.440680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.120 [2024-11-04 11:44:39.442999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.120 [2024-11-04 11:44:39.443042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:14.120 BaseBdev3 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 BaseBdev4_malloc 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 true 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 [2024-11-04 11:44:39.508761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:14.120 [2024-11-04 11:44:39.508826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.120 [2024-11-04 11:44:39.508848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:14.120 [2024-11-04 11:44:39.508859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.120 [2024-11-04 11:44:39.511544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.120 [2024-11-04 11:44:39.511648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:14.120 BaseBdev4 
00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 [2024-11-04 11:44:39.520812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.120 [2024-11-04 11:44:39.522857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.120 [2024-11-04 11:44:39.523008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.120 [2024-11-04 11:44:39.523110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:14.120 [2024-11-04 11:44:39.523380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:14.120 [2024-11-04 11:44:39.523413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:14.120 [2024-11-04 11:44:39.523704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:14.120 [2024-11-04 11:44:39.523893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:14.120 [2024-11-04 11:44:39.523906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:14.120 [2024-11-04 11:44:39.524082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.120 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.120 "name": "raid_bdev1", 00:12:14.120 "uuid": "4ae2fa78-dac3-4b69-8b49-a39c5de43f46", 00:12:14.120 "strip_size_kb": 64, 00:12:14.120 "state": "online", 00:12:14.120 "raid_level": "concat", 00:12:14.120 "superblock": true, 00:12:14.120 "num_base_bdevs": 4, 00:12:14.120 "num_base_bdevs_discovered": 4, 00:12:14.120 
"num_base_bdevs_operational": 4, 00:12:14.120 "base_bdevs_list": [ 00:12:14.120 { 00:12:14.120 "name": "BaseBdev1", 00:12:14.120 "uuid": "08ace0f2-ac9e-5607-bdcb-b40dd91898a8", 00:12:14.120 "is_configured": true, 00:12:14.120 "data_offset": 2048, 00:12:14.120 "data_size": 63488 00:12:14.120 }, 00:12:14.120 { 00:12:14.120 "name": "BaseBdev2", 00:12:14.120 "uuid": "328df59b-0a64-57a9-a0e0-a829babcdd1f", 00:12:14.120 "is_configured": true, 00:12:14.120 "data_offset": 2048, 00:12:14.120 "data_size": 63488 00:12:14.120 }, 00:12:14.120 { 00:12:14.120 "name": "BaseBdev3", 00:12:14.120 "uuid": "a69a216b-629f-5d16-9c7f-eebb6419a0c7", 00:12:14.120 "is_configured": true, 00:12:14.120 "data_offset": 2048, 00:12:14.120 "data_size": 63488 00:12:14.120 }, 00:12:14.120 { 00:12:14.120 "name": "BaseBdev4", 00:12:14.121 "uuid": "d9325e62-18b6-559f-b9ab-6b7b401c7c8a", 00:12:14.121 "is_configured": true, 00:12:14.121 "data_offset": 2048, 00:12:14.121 "data_size": 63488 00:12:14.121 } 00:12:14.121 ] 00:12:14.121 }' 00:12:14.121 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.121 11:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.688 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:14.688 11:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.688 [2024-11-04 11:44:40.061575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.627 11:44:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.627 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.627 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.627 11:44:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.627 11:44:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.627 "name": "raid_bdev1", 00:12:15.627 "uuid": "4ae2fa78-dac3-4b69-8b49-a39c5de43f46", 00:12:15.627 "strip_size_kb": 64, 00:12:15.627 "state": "online", 00:12:15.627 "raid_level": "concat", 00:12:15.627 "superblock": true, 00:12:15.627 "num_base_bdevs": 4, 00:12:15.627 "num_base_bdevs_discovered": 4, 00:12:15.627 "num_base_bdevs_operational": 4, 00:12:15.627 "base_bdevs_list": [ 00:12:15.627 { 00:12:15.627 "name": "BaseBdev1", 00:12:15.627 "uuid": "08ace0f2-ac9e-5607-bdcb-b40dd91898a8", 00:12:15.627 "is_configured": true, 00:12:15.627 "data_offset": 2048, 00:12:15.627 "data_size": 63488 00:12:15.627 }, 00:12:15.627 { 00:12:15.627 "name": "BaseBdev2", 00:12:15.627 "uuid": "328df59b-0a64-57a9-a0e0-a829babcdd1f", 00:12:15.627 "is_configured": true, 00:12:15.627 "data_offset": 2048, 00:12:15.627 "data_size": 63488 00:12:15.627 }, 00:12:15.627 { 00:12:15.627 "name": "BaseBdev3", 00:12:15.627 "uuid": "a69a216b-629f-5d16-9c7f-eebb6419a0c7", 00:12:15.627 "is_configured": true, 00:12:15.627 "data_offset": 2048, 00:12:15.627 "data_size": 63488 00:12:15.627 }, 00:12:15.627 { 00:12:15.627 "name": "BaseBdev4", 00:12:15.627 "uuid": "d9325e62-18b6-559f-b9ab-6b7b401c7c8a", 00:12:15.627 "is_configured": true, 00:12:15.627 "data_offset": 2048, 00:12:15.627 "data_size": 63488 00:12:15.627 } 00:12:15.627 ] 00:12:15.627 }' 00:12:15.627 11:44:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.627 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.248 [2024-11-04 11:44:41.482213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.248 [2024-11-04 11:44:41.482253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.248 [2024-11-04 11:44:41.485535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.248 [2024-11-04 11:44:41.485642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.248 [2024-11-04 11:44:41.485722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.248 [2024-11-04 11:44:41.485798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:16.248 { 00:12:16.248 "results": [ 00:12:16.248 { 00:12:16.248 "job": "raid_bdev1", 00:12:16.248 "core_mask": "0x1", 00:12:16.248 "workload": "randrw", 00:12:16.248 "percentage": 50, 00:12:16.248 "status": "finished", 00:12:16.248 "queue_depth": 1, 00:12:16.248 "io_size": 131072, 00:12:16.248 "runtime": 1.421473, 00:12:16.248 "iops": 14189.506237543732, 00:12:16.248 "mibps": 1773.6882796929665, 00:12:16.248 "io_failed": 1, 00:12:16.248 "io_timeout": 0, 00:12:16.248 "avg_latency_us": 97.66445121287231, 00:12:16.248 "min_latency_us": 27.053275109170304, 00:12:16.248 "max_latency_us": 1631.2454148471616 00:12:16.248 } 00:12:16.248 ], 00:12:16.248 "core_count": 1 00:12:16.248 } 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73272 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73272 ']' 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73272 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73272 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:16.248 killing process with pid 73272 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73272' 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73272 00:12:16.248 [2024-11-04 11:44:41.531393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:16.248 11:44:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73272 00:12:16.506 [2024-11-04 11:44:41.866068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.880 11:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AUAFvGeS6b 00:12:17.880 11:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:17.880 11:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:17.880 11:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:17.880 11:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:17.880 11:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.880 11:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:17.880 ************************************ 00:12:17.880 END TEST raid_write_error_test 00:12:17.880 ************************************ 00:12:17.880 11:44:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:17.880 00:12:17.881 real 0m4.843s 00:12:17.881 user 0m5.751s 00:12:17.881 sys 0m0.562s 00:12:17.881 11:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.881 11:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.881 11:44:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:17.881 11:44:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:17.881 11:44:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:17.881 11:44:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.881 11:44:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.881 ************************************ 00:12:17.881 START TEST raid_state_function_test 00:12:17.881 ************************************ 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:17.881 11:44:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73416 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73416' 00:12:17.881 Process raid pid: 73416 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73416 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73416 ']' 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:17.881 11:44:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.881 [2024-11-04 11:44:43.280072] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:17.881 [2024-11-04 11:44:43.280290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.140 [2024-11-04 11:44:43.439063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.140 [2024-11-04 11:44:43.558157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.399 [2024-11-04 11:44:43.790706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.399 [2024-11-04 11:44:43.790835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.658 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:18.658 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:18.658 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.658 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.658 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.658 [2024-11-04 11:44:44.148511] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.658 [2024-11-04 11:44:44.148563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.658 [2024-11-04 11:44:44.148575] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:18.658 [2024-11-04 11:44:44.148587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:18.658 [2024-11-04 11:44:44.148594] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:18.658 [2024-11-04 11:44:44.148604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:18.658 [2024-11-04 11:44:44.148611] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:18.658 [2024-11-04 11:44:44.148621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:18.658 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.659 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.928 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.928 "name": "Existed_Raid", 00:12:18.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.928 "strip_size_kb": 0, 00:12:18.928 "state": "configuring", 00:12:18.928 "raid_level": "raid1", 00:12:18.928 "superblock": false, 00:12:18.928 "num_base_bdevs": 4, 00:12:18.928 "num_base_bdevs_discovered": 0, 00:12:18.928 "num_base_bdevs_operational": 4, 00:12:18.928 "base_bdevs_list": [ 00:12:18.928 { 00:12:18.928 "name": "BaseBdev1", 00:12:18.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.928 "is_configured": false, 00:12:18.928 "data_offset": 0, 00:12:18.928 "data_size": 0 00:12:18.928 }, 00:12:18.928 { 00:12:18.928 "name": "BaseBdev2", 00:12:18.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.928 "is_configured": false, 00:12:18.928 "data_offset": 0, 00:12:18.928 "data_size": 0 00:12:18.928 }, 00:12:18.928 { 00:12:18.928 "name": "BaseBdev3", 00:12:18.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.928 "is_configured": false, 00:12:18.928 "data_offset": 0, 00:12:18.928 "data_size": 0 00:12:18.928 }, 00:12:18.928 { 00:12:18.928 "name": "BaseBdev4", 00:12:18.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.928 "is_configured": false, 00:12:18.928 "data_offset": 0, 00:12:18.928 "data_size": 0 00:12:18.928 } 00:12:18.928 ] 00:12:18.928 }' 00:12:18.928 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.928 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.187 [2024-11-04 11:44:44.639623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.187 [2024-11-04 11:44:44.639737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.187 [2024-11-04 11:44:44.647591] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.187 [2024-11-04 11:44:44.647634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.187 [2024-11-04 11:44:44.647645] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.187 [2024-11-04 11:44:44.647654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.187 [2024-11-04 11:44:44.647660] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:19.187 [2024-11-04 11:44:44.647669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.187 [2024-11-04 11:44:44.647675] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:19.187 [2024-11-04 11:44:44.647684] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.187 [2024-11-04 11:44:44.692702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.187 BaseBdev1 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.187 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.446 [ 00:12:19.446 { 00:12:19.446 "name": "BaseBdev1", 00:12:19.446 "aliases": [ 00:12:19.446 "c4476dd0-9237-44cb-a813-98e6e71a1ee3" 00:12:19.446 ], 00:12:19.446 "product_name": "Malloc disk", 00:12:19.446 "block_size": 512, 00:12:19.446 "num_blocks": 65536, 00:12:19.446 "uuid": "c4476dd0-9237-44cb-a813-98e6e71a1ee3", 00:12:19.446 "assigned_rate_limits": { 00:12:19.446 "rw_ios_per_sec": 0, 00:12:19.446 "rw_mbytes_per_sec": 0, 00:12:19.446 "r_mbytes_per_sec": 0, 00:12:19.446 "w_mbytes_per_sec": 0 00:12:19.446 }, 00:12:19.446 "claimed": true, 00:12:19.446 "claim_type": "exclusive_write", 00:12:19.446 "zoned": false, 00:12:19.446 "supported_io_types": { 00:12:19.446 "read": true, 00:12:19.446 "write": true, 00:12:19.446 "unmap": true, 00:12:19.446 "flush": true, 00:12:19.446 "reset": true, 00:12:19.446 "nvme_admin": false, 00:12:19.446 "nvme_io": false, 00:12:19.446 "nvme_io_md": false, 00:12:19.446 "write_zeroes": true, 00:12:19.446 "zcopy": true, 00:12:19.446 "get_zone_info": false, 00:12:19.446 "zone_management": false, 00:12:19.446 "zone_append": false, 00:12:19.446 "compare": false, 00:12:19.446 "compare_and_write": false, 00:12:19.446 "abort": true, 00:12:19.446 "seek_hole": false, 00:12:19.446 "seek_data": false, 00:12:19.446 "copy": true, 00:12:19.446 "nvme_iov_md": false 00:12:19.446 }, 00:12:19.446 "memory_domains": [ 00:12:19.446 { 00:12:19.446 "dma_device_id": "system", 00:12:19.446 "dma_device_type": 1 00:12:19.446 }, 00:12:19.446 { 00:12:19.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.446 "dma_device_type": 2 00:12:19.446 } 00:12:19.446 ], 00:12:19.446 "driver_specific": {} 00:12:19.446 } 00:12:19.446 ] 00:12:19.446 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:19.446 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:19.446 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.446 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.446 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.447 "name": "Existed_Raid", 
00:12:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.447 "strip_size_kb": 0, 00:12:19.447 "state": "configuring", 00:12:19.447 "raid_level": "raid1", 00:12:19.447 "superblock": false, 00:12:19.447 "num_base_bdevs": 4, 00:12:19.447 "num_base_bdevs_discovered": 1, 00:12:19.447 "num_base_bdevs_operational": 4, 00:12:19.447 "base_bdevs_list": [ 00:12:19.447 { 00:12:19.447 "name": "BaseBdev1", 00:12:19.447 "uuid": "c4476dd0-9237-44cb-a813-98e6e71a1ee3", 00:12:19.447 "is_configured": true, 00:12:19.447 "data_offset": 0, 00:12:19.447 "data_size": 65536 00:12:19.447 }, 00:12:19.447 { 00:12:19.447 "name": "BaseBdev2", 00:12:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.447 "is_configured": false, 00:12:19.447 "data_offset": 0, 00:12:19.447 "data_size": 0 00:12:19.447 }, 00:12:19.447 { 00:12:19.447 "name": "BaseBdev3", 00:12:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.447 "is_configured": false, 00:12:19.447 "data_offset": 0, 00:12:19.447 "data_size": 0 00:12:19.447 }, 00:12:19.447 { 00:12:19.447 "name": "BaseBdev4", 00:12:19.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.447 "is_configured": false, 00:12:19.447 "data_offset": 0, 00:12:19.447 "data_size": 0 00:12:19.447 } 00:12:19.447 ] 00:12:19.447 }' 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.447 11:44:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.707 [2024-11-04 11:44:45.188007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.707 [2024-11-04 11:44:45.188068] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.707 [2024-11-04 11:44:45.200054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.707 [2024-11-04 11:44:45.202158] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.707 [2024-11-04 11:44:45.202226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.707 [2024-11-04 11:44:45.202239] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:19.707 [2024-11-04 11:44:45.202252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:19.707 [2024-11-04 11:44:45.202260] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:19.707 [2024-11-04 11:44:45.202270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.707 
11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.707 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.967 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.967 "name": "Existed_Raid", 00:12:19.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.967 "strip_size_kb": 0, 00:12:19.967 "state": "configuring", 00:12:19.967 "raid_level": "raid1", 00:12:19.967 "superblock": false, 00:12:19.967 "num_base_bdevs": 4, 00:12:19.967 "num_base_bdevs_discovered": 1, 
00:12:19.967 "num_base_bdevs_operational": 4, 00:12:19.967 "base_bdevs_list": [ 00:12:19.967 { 00:12:19.967 "name": "BaseBdev1", 00:12:19.967 "uuid": "c4476dd0-9237-44cb-a813-98e6e71a1ee3", 00:12:19.967 "is_configured": true, 00:12:19.967 "data_offset": 0, 00:12:19.967 "data_size": 65536 00:12:19.967 }, 00:12:19.967 { 00:12:19.967 "name": "BaseBdev2", 00:12:19.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.967 "is_configured": false, 00:12:19.967 "data_offset": 0, 00:12:19.967 "data_size": 0 00:12:19.967 }, 00:12:19.967 { 00:12:19.967 "name": "BaseBdev3", 00:12:19.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.967 "is_configured": false, 00:12:19.967 "data_offset": 0, 00:12:19.967 "data_size": 0 00:12:19.967 }, 00:12:19.967 { 00:12:19.967 "name": "BaseBdev4", 00:12:19.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.967 "is_configured": false, 00:12:19.967 "data_offset": 0, 00:12:19.967 "data_size": 0 00:12:19.967 } 00:12:19.967 ] 00:12:19.967 }' 00:12:19.967 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.967 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.227 [2024-11-04 11:44:45.655734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.227 BaseBdev2 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.227 [ 00:12:20.227 { 00:12:20.227 "name": "BaseBdev2", 00:12:20.227 "aliases": [ 00:12:20.227 "e43188dc-9fe0-49a3-a0ab-1eb73e0c11f7" 00:12:20.227 ], 00:12:20.227 "product_name": "Malloc disk", 00:12:20.227 "block_size": 512, 00:12:20.227 "num_blocks": 65536, 00:12:20.227 "uuid": "e43188dc-9fe0-49a3-a0ab-1eb73e0c11f7", 00:12:20.227 "assigned_rate_limits": { 00:12:20.227 "rw_ios_per_sec": 0, 00:12:20.227 "rw_mbytes_per_sec": 0, 00:12:20.227 "r_mbytes_per_sec": 0, 00:12:20.227 "w_mbytes_per_sec": 0 00:12:20.227 }, 00:12:20.227 "claimed": true, 00:12:20.227 "claim_type": "exclusive_write", 00:12:20.227 "zoned": false, 00:12:20.227 "supported_io_types": { 00:12:20.227 "read": true, 
00:12:20.227 "write": true, 00:12:20.227 "unmap": true, 00:12:20.227 "flush": true, 00:12:20.227 "reset": true, 00:12:20.227 "nvme_admin": false, 00:12:20.227 "nvme_io": false, 00:12:20.227 "nvme_io_md": false, 00:12:20.227 "write_zeroes": true, 00:12:20.227 "zcopy": true, 00:12:20.227 "get_zone_info": false, 00:12:20.227 "zone_management": false, 00:12:20.227 "zone_append": false, 00:12:20.227 "compare": false, 00:12:20.227 "compare_and_write": false, 00:12:20.227 "abort": true, 00:12:20.227 "seek_hole": false, 00:12:20.227 "seek_data": false, 00:12:20.227 "copy": true, 00:12:20.227 "nvme_iov_md": false 00:12:20.227 }, 00:12:20.227 "memory_domains": [ 00:12:20.227 { 00:12:20.227 "dma_device_id": "system", 00:12:20.227 "dma_device_type": 1 00:12:20.227 }, 00:12:20.227 { 00:12:20.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.227 "dma_device_type": 2 00:12:20.227 } 00:12:20.227 ], 00:12:20.227 "driver_specific": {} 00:12:20.227 } 00:12:20.227 ] 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.227 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.487 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.487 "name": "Existed_Raid", 00:12:20.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.487 "strip_size_kb": 0, 00:12:20.487 "state": "configuring", 00:12:20.487 "raid_level": "raid1", 00:12:20.487 "superblock": false, 00:12:20.487 "num_base_bdevs": 4, 00:12:20.487 "num_base_bdevs_discovered": 2, 00:12:20.488 "num_base_bdevs_operational": 4, 00:12:20.488 "base_bdevs_list": [ 00:12:20.488 { 00:12:20.488 "name": "BaseBdev1", 00:12:20.488 "uuid": "c4476dd0-9237-44cb-a813-98e6e71a1ee3", 00:12:20.488 "is_configured": true, 00:12:20.488 "data_offset": 0, 00:12:20.488 "data_size": 65536 00:12:20.488 }, 00:12:20.488 { 00:12:20.488 "name": "BaseBdev2", 00:12:20.488 "uuid": "e43188dc-9fe0-49a3-a0ab-1eb73e0c11f7", 00:12:20.488 "is_configured": true, 
00:12:20.488 "data_offset": 0, 00:12:20.488 "data_size": 65536 00:12:20.488 }, 00:12:20.488 { 00:12:20.488 "name": "BaseBdev3", 00:12:20.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.488 "is_configured": false, 00:12:20.488 "data_offset": 0, 00:12:20.488 "data_size": 0 00:12:20.488 }, 00:12:20.488 { 00:12:20.488 "name": "BaseBdev4", 00:12:20.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.488 "is_configured": false, 00:12:20.488 "data_offset": 0, 00:12:20.488 "data_size": 0 00:12:20.488 } 00:12:20.488 ] 00:12:20.488 }' 00:12:20.488 11:44:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.488 11:44:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.747 [2024-11-04 11:44:46.184895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.747 BaseBdev3 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:20.747 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.748 [ 00:12:20.748 { 00:12:20.748 "name": "BaseBdev3", 00:12:20.748 "aliases": [ 00:12:20.748 "10428919-aa3b-4ac0-a088-baa8a892562c" 00:12:20.748 ], 00:12:20.748 "product_name": "Malloc disk", 00:12:20.748 "block_size": 512, 00:12:20.748 "num_blocks": 65536, 00:12:20.748 "uuid": "10428919-aa3b-4ac0-a088-baa8a892562c", 00:12:20.748 "assigned_rate_limits": { 00:12:20.748 "rw_ios_per_sec": 0, 00:12:20.748 "rw_mbytes_per_sec": 0, 00:12:20.748 "r_mbytes_per_sec": 0, 00:12:20.748 "w_mbytes_per_sec": 0 00:12:20.748 }, 00:12:20.748 "claimed": true, 00:12:20.748 "claim_type": "exclusive_write", 00:12:20.748 "zoned": false, 00:12:20.748 "supported_io_types": { 00:12:20.748 "read": true, 00:12:20.748 "write": true, 00:12:20.748 "unmap": true, 00:12:20.748 "flush": true, 00:12:20.748 "reset": true, 00:12:20.748 "nvme_admin": false, 00:12:20.748 "nvme_io": false, 00:12:20.748 "nvme_io_md": false, 00:12:20.748 "write_zeroes": true, 00:12:20.748 "zcopy": true, 00:12:20.748 "get_zone_info": false, 00:12:20.748 "zone_management": false, 00:12:20.748 "zone_append": false, 00:12:20.748 "compare": false, 00:12:20.748 "compare_and_write": false, 
00:12:20.748 "abort": true, 00:12:20.748 "seek_hole": false, 00:12:20.748 "seek_data": false, 00:12:20.748 "copy": true, 00:12:20.748 "nvme_iov_md": false 00:12:20.748 }, 00:12:20.748 "memory_domains": [ 00:12:20.748 { 00:12:20.748 "dma_device_id": "system", 00:12:20.748 "dma_device_type": 1 00:12:20.748 }, 00:12:20.748 { 00:12:20.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.748 "dma_device_type": 2 00:12:20.748 } 00:12:20.748 ], 00:12:20.748 "driver_specific": {} 00:12:20.748 } 00:12:20.748 ] 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.748 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.007 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.007 "name": "Existed_Raid", 00:12:21.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.007 "strip_size_kb": 0, 00:12:21.007 "state": "configuring", 00:12:21.007 "raid_level": "raid1", 00:12:21.007 "superblock": false, 00:12:21.007 "num_base_bdevs": 4, 00:12:21.007 "num_base_bdevs_discovered": 3, 00:12:21.007 "num_base_bdevs_operational": 4, 00:12:21.007 "base_bdevs_list": [ 00:12:21.007 { 00:12:21.007 "name": "BaseBdev1", 00:12:21.007 "uuid": "c4476dd0-9237-44cb-a813-98e6e71a1ee3", 00:12:21.007 "is_configured": true, 00:12:21.007 "data_offset": 0, 00:12:21.007 "data_size": 65536 00:12:21.007 }, 00:12:21.007 { 00:12:21.007 "name": "BaseBdev2", 00:12:21.007 "uuid": "e43188dc-9fe0-49a3-a0ab-1eb73e0c11f7", 00:12:21.007 "is_configured": true, 00:12:21.007 "data_offset": 0, 00:12:21.007 "data_size": 65536 00:12:21.007 }, 00:12:21.007 { 00:12:21.007 "name": "BaseBdev3", 00:12:21.007 "uuid": "10428919-aa3b-4ac0-a088-baa8a892562c", 00:12:21.007 "is_configured": true, 00:12:21.007 "data_offset": 0, 00:12:21.007 "data_size": 65536 00:12:21.007 }, 00:12:21.007 { 00:12:21.007 "name": "BaseBdev4", 00:12:21.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.008 "is_configured": false, 
00:12:21.008 "data_offset": 0, 00:12:21.008 "data_size": 0 00:12:21.008 } 00:12:21.008 ] 00:12:21.008 }' 00:12:21.008 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.008 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.270 [2024-11-04 11:44:46.752880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:21.270 [2024-11-04 11:44:46.753018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:21.270 [2024-11-04 11:44:46.753044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:21.270 [2024-11-04 11:44:46.753382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:21.270 [2024-11-04 11:44:46.753642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:21.270 [2024-11-04 11:44:46.753693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:21.270 [2024-11-04 11:44:46.754091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.270 BaseBdev4 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.270 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.270 [ 00:12:21.270 { 00:12:21.270 "name": "BaseBdev4", 00:12:21.270 "aliases": [ 00:12:21.270 "a396c0cb-93b3-41e9-83cf-15c2bec2c1ce" 00:12:21.270 ], 00:12:21.270 "product_name": "Malloc disk", 00:12:21.270 "block_size": 512, 00:12:21.270 "num_blocks": 65536, 00:12:21.270 "uuid": "a396c0cb-93b3-41e9-83cf-15c2bec2c1ce", 00:12:21.270 "assigned_rate_limits": { 00:12:21.270 "rw_ios_per_sec": 0, 00:12:21.270 "rw_mbytes_per_sec": 0, 00:12:21.270 "r_mbytes_per_sec": 0, 00:12:21.270 "w_mbytes_per_sec": 0 00:12:21.270 }, 00:12:21.270 "claimed": true, 00:12:21.270 "claim_type": "exclusive_write", 00:12:21.270 "zoned": false, 00:12:21.270 "supported_io_types": { 00:12:21.270 "read": true, 00:12:21.270 "write": true, 00:12:21.270 "unmap": true, 00:12:21.270 "flush": true, 00:12:21.270 "reset": true, 00:12:21.270 
"nvme_admin": false, 00:12:21.270 "nvme_io": false, 00:12:21.270 "nvme_io_md": false, 00:12:21.270 "write_zeroes": true, 00:12:21.270 "zcopy": true, 00:12:21.270 "get_zone_info": false, 00:12:21.270 "zone_management": false, 00:12:21.270 "zone_append": false, 00:12:21.270 "compare": false, 00:12:21.270 "compare_and_write": false, 00:12:21.270 "abort": true, 00:12:21.270 "seek_hole": false, 00:12:21.270 "seek_data": false, 00:12:21.270 "copy": true, 00:12:21.530 "nvme_iov_md": false 00:12:21.530 }, 00:12:21.530 "memory_domains": [ 00:12:21.530 { 00:12:21.530 "dma_device_id": "system", 00:12:21.530 "dma_device_type": 1 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.530 "dma_device_type": 2 00:12:21.530 } 00:12:21.530 ], 00:12:21.530 "driver_specific": {} 00:12:21.530 } 00:12:21.530 ] 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.530 11:44:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.530 "name": "Existed_Raid", 00:12:21.530 "uuid": "592caf17-ece4-4e10-bc1f-d1459507a8be", 00:12:21.530 "strip_size_kb": 0, 00:12:21.530 "state": "online", 00:12:21.530 "raid_level": "raid1", 00:12:21.530 "superblock": false, 00:12:21.530 "num_base_bdevs": 4, 00:12:21.530 "num_base_bdevs_discovered": 4, 00:12:21.530 "num_base_bdevs_operational": 4, 00:12:21.530 "base_bdevs_list": [ 00:12:21.530 { 00:12:21.530 "name": "BaseBdev1", 00:12:21.530 "uuid": "c4476dd0-9237-44cb-a813-98e6e71a1ee3", 00:12:21.530 "is_configured": true, 00:12:21.530 "data_offset": 0, 00:12:21.530 "data_size": 65536 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "name": "BaseBdev2", 00:12:21.530 "uuid": "e43188dc-9fe0-49a3-a0ab-1eb73e0c11f7", 00:12:21.530 "is_configured": true, 00:12:21.530 "data_offset": 0, 00:12:21.530 "data_size": 65536 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "name": "BaseBdev3", 00:12:21.530 "uuid": 
"10428919-aa3b-4ac0-a088-baa8a892562c", 00:12:21.530 "is_configured": true, 00:12:21.530 "data_offset": 0, 00:12:21.530 "data_size": 65536 00:12:21.530 }, 00:12:21.530 { 00:12:21.530 "name": "BaseBdev4", 00:12:21.530 "uuid": "a396c0cb-93b3-41e9-83cf-15c2bec2c1ce", 00:12:21.530 "is_configured": true, 00:12:21.530 "data_offset": 0, 00:12:21.530 "data_size": 65536 00:12:21.530 } 00:12:21.530 ] 00:12:21.530 }' 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.530 11:44:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.790 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.790 [2024-11-04 11:44:47.296433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.052 11:44:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.052 "name": "Existed_Raid", 00:12:22.052 "aliases": [ 00:12:22.052 "592caf17-ece4-4e10-bc1f-d1459507a8be" 00:12:22.052 ], 00:12:22.052 "product_name": "Raid Volume", 00:12:22.052 "block_size": 512, 00:12:22.052 "num_blocks": 65536, 00:12:22.052 "uuid": "592caf17-ece4-4e10-bc1f-d1459507a8be", 00:12:22.052 "assigned_rate_limits": { 00:12:22.052 "rw_ios_per_sec": 0, 00:12:22.052 "rw_mbytes_per_sec": 0, 00:12:22.052 "r_mbytes_per_sec": 0, 00:12:22.052 "w_mbytes_per_sec": 0 00:12:22.052 }, 00:12:22.052 "claimed": false, 00:12:22.052 "zoned": false, 00:12:22.052 "supported_io_types": { 00:12:22.052 "read": true, 00:12:22.052 "write": true, 00:12:22.052 "unmap": false, 00:12:22.052 "flush": false, 00:12:22.052 "reset": true, 00:12:22.052 "nvme_admin": false, 00:12:22.052 "nvme_io": false, 00:12:22.052 "nvme_io_md": false, 00:12:22.052 "write_zeroes": true, 00:12:22.052 "zcopy": false, 00:12:22.052 "get_zone_info": false, 00:12:22.052 "zone_management": false, 00:12:22.052 "zone_append": false, 00:12:22.052 "compare": false, 00:12:22.052 "compare_and_write": false, 00:12:22.052 "abort": false, 00:12:22.052 "seek_hole": false, 00:12:22.052 "seek_data": false, 00:12:22.052 "copy": false, 00:12:22.052 "nvme_iov_md": false 00:12:22.052 }, 00:12:22.052 "memory_domains": [ 00:12:22.052 { 00:12:22.052 "dma_device_id": "system", 00:12:22.052 "dma_device_type": 1 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.052 "dma_device_type": 2 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "dma_device_id": "system", 00:12:22.052 "dma_device_type": 1 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.052 "dma_device_type": 2 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "dma_device_id": "system", 00:12:22.052 "dma_device_type": 1 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:22.052 "dma_device_type": 2 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "dma_device_id": "system", 00:12:22.052 "dma_device_type": 1 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.052 "dma_device_type": 2 00:12:22.052 } 00:12:22.052 ], 00:12:22.052 "driver_specific": { 00:12:22.052 "raid": { 00:12:22.052 "uuid": "592caf17-ece4-4e10-bc1f-d1459507a8be", 00:12:22.052 "strip_size_kb": 0, 00:12:22.052 "state": "online", 00:12:22.052 "raid_level": "raid1", 00:12:22.052 "superblock": false, 00:12:22.052 "num_base_bdevs": 4, 00:12:22.052 "num_base_bdevs_discovered": 4, 00:12:22.052 "num_base_bdevs_operational": 4, 00:12:22.052 "base_bdevs_list": [ 00:12:22.052 { 00:12:22.052 "name": "BaseBdev1", 00:12:22.052 "uuid": "c4476dd0-9237-44cb-a813-98e6e71a1ee3", 00:12:22.052 "is_configured": true, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "name": "BaseBdev2", 00:12:22.052 "uuid": "e43188dc-9fe0-49a3-a0ab-1eb73e0c11f7", 00:12:22.052 "is_configured": true, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "name": "BaseBdev3", 00:12:22.052 "uuid": "10428919-aa3b-4ac0-a088-baa8a892562c", 00:12:22.052 "is_configured": true, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 }, 00:12:22.052 { 00:12:22.052 "name": "BaseBdev4", 00:12:22.052 "uuid": "a396c0cb-93b3-41e9-83cf-15c2bec2c1ce", 00:12:22.052 "is_configured": true, 00:12:22.052 "data_offset": 0, 00:12:22.052 "data_size": 65536 00:12:22.052 } 00:12:22.052 ] 00:12:22.052 } 00:12:22.052 } 00:12:22.052 }' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:22.052 BaseBdev2 00:12:22.052 BaseBdev3 
00:12:22.052 BaseBdev4' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.052 11:44:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.052 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.053 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.053 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.053 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.053 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.053 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.053 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.313 11:44:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.313 [2024-11-04 11:44:47.583644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.313 
11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.313 "name": "Existed_Raid", 00:12:22.313 "uuid": "592caf17-ece4-4e10-bc1f-d1459507a8be", 00:12:22.313 "strip_size_kb": 0, 00:12:22.313 "state": "online", 00:12:22.313 "raid_level": "raid1", 00:12:22.313 "superblock": false, 00:12:22.313 "num_base_bdevs": 4, 00:12:22.313 "num_base_bdevs_discovered": 3, 00:12:22.313 "num_base_bdevs_operational": 3, 00:12:22.313 "base_bdevs_list": [ 00:12:22.313 { 00:12:22.313 "name": null, 00:12:22.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.313 "is_configured": false, 00:12:22.313 "data_offset": 0, 00:12:22.313 "data_size": 65536 00:12:22.313 }, 00:12:22.313 { 00:12:22.313 "name": "BaseBdev2", 00:12:22.313 "uuid": "e43188dc-9fe0-49a3-a0ab-1eb73e0c11f7", 00:12:22.313 "is_configured": true, 00:12:22.313 "data_offset": 0, 00:12:22.313 "data_size": 65536 00:12:22.313 }, 00:12:22.313 { 00:12:22.313 "name": "BaseBdev3", 00:12:22.313 "uuid": "10428919-aa3b-4ac0-a088-baa8a892562c", 00:12:22.313 "is_configured": true, 00:12:22.313 "data_offset": 0, 
00:12:22.313 "data_size": 65536 00:12:22.313 }, 00:12:22.313 { 00:12:22.313 "name": "BaseBdev4", 00:12:22.313 "uuid": "a396c0cb-93b3-41e9-83cf-15c2bec2c1ce", 00:12:22.313 "is_configured": true, 00:12:22.313 "data_offset": 0, 00:12:22.313 "data_size": 65536 00:12:22.313 } 00:12:22.313 ] 00:12:22.313 }' 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.313 11:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.881 [2024-11-04 11:44:48.173575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.881 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.881 [2024-11-04 11:44:48.337888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.141 [2024-11-04 11:44:48.495875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:23.141 [2024-11-04 11:44:48.495965] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.141 [2024-11-04 11:44:48.597034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.141 [2024-11-04 11:44:48.597189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.141 [2024-11-04 11:44:48.597242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:23.141 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:23.142 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:23.142 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:23.142 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:23.142 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:23.142 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.142 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.402 BaseBdev2 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.402 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.402 [ 00:12:23.402 { 00:12:23.402 "name": "BaseBdev2", 00:12:23.402 "aliases": [ 00:12:23.402 "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1" 00:12:23.402 ], 00:12:23.402 "product_name": "Malloc disk", 00:12:23.402 "block_size": 512, 00:12:23.402 "num_blocks": 65536, 00:12:23.402 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:23.402 "assigned_rate_limits": { 00:12:23.402 "rw_ios_per_sec": 0, 00:12:23.402 "rw_mbytes_per_sec": 0, 00:12:23.402 "r_mbytes_per_sec": 0, 00:12:23.402 "w_mbytes_per_sec": 0 00:12:23.402 }, 00:12:23.402 "claimed": false, 00:12:23.402 "zoned": false, 00:12:23.402 "supported_io_types": { 00:12:23.402 "read": true, 00:12:23.402 "write": true, 00:12:23.402 "unmap": true, 00:12:23.402 "flush": true, 00:12:23.402 "reset": true, 00:12:23.402 "nvme_admin": false, 00:12:23.402 "nvme_io": false, 00:12:23.402 "nvme_io_md": false, 00:12:23.402 "write_zeroes": true, 00:12:23.402 "zcopy": true, 00:12:23.402 "get_zone_info": false, 00:12:23.402 "zone_management": false, 00:12:23.402 "zone_append": false, 
00:12:23.402 "compare": false, 00:12:23.402 "compare_and_write": false, 00:12:23.402 "abort": true, 00:12:23.402 "seek_hole": false, 00:12:23.402 "seek_data": false, 00:12:23.402 "copy": true, 00:12:23.402 "nvme_iov_md": false 00:12:23.402 }, 00:12:23.402 "memory_domains": [ 00:12:23.402 { 00:12:23.403 "dma_device_id": "system", 00:12:23.403 "dma_device_type": 1 00:12:23.403 }, 00:12:23.403 { 00:12:23.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.403 "dma_device_type": 2 00:12:23.403 } 00:12:23.403 ], 00:12:23.403 "driver_specific": {} 00:12:23.403 } 00:12:23.403 ] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.403 BaseBdev3 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.403 [ 00:12:23.403 { 00:12:23.403 "name": "BaseBdev3", 00:12:23.403 "aliases": [ 00:12:23.403 "718c481b-7ee0-470a-be57-4fdf71067762" 00:12:23.403 ], 00:12:23.403 "product_name": "Malloc disk", 00:12:23.403 "block_size": 512, 00:12:23.403 "num_blocks": 65536, 00:12:23.403 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:23.403 "assigned_rate_limits": { 00:12:23.403 "rw_ios_per_sec": 0, 00:12:23.403 "rw_mbytes_per_sec": 0, 00:12:23.403 "r_mbytes_per_sec": 0, 00:12:23.403 "w_mbytes_per_sec": 0 00:12:23.403 }, 00:12:23.403 "claimed": false, 00:12:23.403 "zoned": false, 00:12:23.403 "supported_io_types": { 00:12:23.403 "read": true, 00:12:23.403 "write": true, 00:12:23.403 "unmap": true, 00:12:23.403 "flush": true, 00:12:23.403 "reset": true, 00:12:23.403 "nvme_admin": false, 00:12:23.403 "nvme_io": false, 00:12:23.403 "nvme_io_md": false, 00:12:23.403 "write_zeroes": true, 00:12:23.403 "zcopy": true, 00:12:23.403 "get_zone_info": false, 00:12:23.403 "zone_management": false, 00:12:23.403 "zone_append": false, 
00:12:23.403 "compare": false, 00:12:23.403 "compare_and_write": false, 00:12:23.403 "abort": true, 00:12:23.403 "seek_hole": false, 00:12:23.403 "seek_data": false, 00:12:23.403 "copy": true, 00:12:23.403 "nvme_iov_md": false 00:12:23.403 }, 00:12:23.403 "memory_domains": [ 00:12:23.403 { 00:12:23.403 "dma_device_id": "system", 00:12:23.403 "dma_device_type": 1 00:12:23.403 }, 00:12:23.403 { 00:12:23.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.403 "dma_device_type": 2 00:12:23.403 } 00:12:23.403 ], 00:12:23.403 "driver_specific": {} 00:12:23.403 } 00:12:23.403 ] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.403 BaseBdev4 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.403 [ 00:12:23.403 { 00:12:23.403 "name": "BaseBdev4", 00:12:23.403 "aliases": [ 00:12:23.403 "cf7d8127-3843-49cf-aaa5-a591d7d6c434" 00:12:23.403 ], 00:12:23.403 "product_name": "Malloc disk", 00:12:23.403 "block_size": 512, 00:12:23.403 "num_blocks": 65536, 00:12:23.403 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:23.403 "assigned_rate_limits": { 00:12:23.403 "rw_ios_per_sec": 0, 00:12:23.403 "rw_mbytes_per_sec": 0, 00:12:23.403 "r_mbytes_per_sec": 0, 00:12:23.403 "w_mbytes_per_sec": 0 00:12:23.403 }, 00:12:23.403 "claimed": false, 00:12:23.403 "zoned": false, 00:12:23.403 "supported_io_types": { 00:12:23.403 "read": true, 00:12:23.403 "write": true, 00:12:23.403 "unmap": true, 00:12:23.403 "flush": true, 00:12:23.403 "reset": true, 00:12:23.403 "nvme_admin": false, 00:12:23.403 "nvme_io": false, 00:12:23.403 "nvme_io_md": false, 00:12:23.403 "write_zeroes": true, 00:12:23.403 "zcopy": true, 00:12:23.403 "get_zone_info": false, 00:12:23.403 "zone_management": false, 00:12:23.403 "zone_append": false, 
00:12:23.403 "compare": false, 00:12:23.403 "compare_and_write": false, 00:12:23.403 "abort": true, 00:12:23.403 "seek_hole": false, 00:12:23.403 "seek_data": false, 00:12:23.403 "copy": true, 00:12:23.403 "nvme_iov_md": false 00:12:23.403 }, 00:12:23.403 "memory_domains": [ 00:12:23.403 { 00:12:23.403 "dma_device_id": "system", 00:12:23.403 "dma_device_type": 1 00:12:23.403 }, 00:12:23.403 { 00:12:23.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.403 "dma_device_type": 2 00:12:23.403 } 00:12:23.403 ], 00:12:23.403 "driver_specific": {} 00:12:23.403 } 00:12:23.403 ] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.403 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.403 [2024-11-04 11:44:48.858363] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:23.403 [2024-11-04 11:44:48.858497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:23.404 [2024-11-04 11:44:48.858553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.404 [2024-11-04 11:44:48.861718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.404 [2024-11-04 11:44:48.861813] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:23.404 "name": "Existed_Raid", 00:12:23.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.404 "strip_size_kb": 0, 00:12:23.404 "state": "configuring", 00:12:23.404 "raid_level": "raid1", 00:12:23.404 "superblock": false, 00:12:23.404 "num_base_bdevs": 4, 00:12:23.404 "num_base_bdevs_discovered": 3, 00:12:23.404 "num_base_bdevs_operational": 4, 00:12:23.404 "base_bdevs_list": [ 00:12:23.404 { 00:12:23.404 "name": "BaseBdev1", 00:12:23.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.404 "is_configured": false, 00:12:23.404 "data_offset": 0, 00:12:23.404 "data_size": 0 00:12:23.404 }, 00:12:23.404 { 00:12:23.404 "name": "BaseBdev2", 00:12:23.404 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:23.404 "is_configured": true, 00:12:23.404 "data_offset": 0, 00:12:23.404 "data_size": 65536 00:12:23.404 }, 00:12:23.404 { 00:12:23.404 "name": "BaseBdev3", 00:12:23.404 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:23.404 "is_configured": true, 00:12:23.404 "data_offset": 0, 00:12:23.404 "data_size": 65536 00:12:23.404 }, 00:12:23.404 { 00:12:23.404 "name": "BaseBdev4", 00:12:23.404 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:23.404 "is_configured": true, 00:12:23.404 "data_offset": 0, 00:12:23.404 "data_size": 65536 00:12:23.404 } 00:12:23.404 ] 00:12:23.404 }' 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.404 11:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 [2024-11-04 11:44:49.337563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.974 "name": "Existed_Raid", 00:12:23.974 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:23.974 "strip_size_kb": 0, 00:12:23.974 "state": "configuring", 00:12:23.974 "raid_level": "raid1", 00:12:23.974 "superblock": false, 00:12:23.974 "num_base_bdevs": 4, 00:12:23.974 "num_base_bdevs_discovered": 2, 00:12:23.974 "num_base_bdevs_operational": 4, 00:12:23.974 "base_bdevs_list": [ 00:12:23.974 { 00:12:23.974 "name": "BaseBdev1", 00:12:23.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.974 "is_configured": false, 00:12:23.974 "data_offset": 0, 00:12:23.974 "data_size": 0 00:12:23.974 }, 00:12:23.974 { 00:12:23.974 "name": null, 00:12:23.974 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:23.974 "is_configured": false, 00:12:23.974 "data_offset": 0, 00:12:23.974 "data_size": 65536 00:12:23.974 }, 00:12:23.974 { 00:12:23.974 "name": "BaseBdev3", 00:12:23.974 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:23.974 "is_configured": true, 00:12:23.974 "data_offset": 0, 00:12:23.974 "data_size": 65536 00:12:23.974 }, 00:12:23.974 { 00:12:23.974 "name": "BaseBdev4", 00:12:23.974 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:23.974 "is_configured": true, 00:12:23.974 "data_offset": 0, 00:12:23.974 "data_size": 65536 00:12:23.974 } 00:12:23.974 ] 00:12:23.974 }' 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.974 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.544 [2024-11-04 11:44:49.870010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.544 BaseBdev1 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.544 [ 00:12:24.544 { 00:12:24.544 "name": "BaseBdev1", 00:12:24.544 "aliases": [ 00:12:24.544 "be702678-8afa-4bd3-bf56-53b2da713816" 00:12:24.544 ], 00:12:24.544 "product_name": "Malloc disk", 00:12:24.544 "block_size": 512, 00:12:24.544 "num_blocks": 65536, 00:12:24.544 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:24.544 "assigned_rate_limits": { 00:12:24.544 "rw_ios_per_sec": 0, 00:12:24.544 "rw_mbytes_per_sec": 0, 00:12:24.544 "r_mbytes_per_sec": 0, 00:12:24.544 "w_mbytes_per_sec": 0 00:12:24.544 }, 00:12:24.544 "claimed": true, 00:12:24.544 "claim_type": "exclusive_write", 00:12:24.544 "zoned": false, 00:12:24.544 "supported_io_types": { 00:12:24.544 "read": true, 00:12:24.544 "write": true, 00:12:24.544 "unmap": true, 00:12:24.544 "flush": true, 00:12:24.544 "reset": true, 00:12:24.544 "nvme_admin": false, 00:12:24.544 "nvme_io": false, 00:12:24.544 "nvme_io_md": false, 00:12:24.544 "write_zeroes": true, 00:12:24.544 "zcopy": true, 00:12:24.544 "get_zone_info": false, 00:12:24.544 "zone_management": false, 00:12:24.544 "zone_append": false, 00:12:24.544 "compare": false, 00:12:24.544 "compare_and_write": false, 00:12:24.544 "abort": true, 00:12:24.544 "seek_hole": false, 00:12:24.544 "seek_data": false, 00:12:24.544 "copy": true, 00:12:24.544 "nvme_iov_md": false 00:12:24.544 }, 00:12:24.544 "memory_domains": [ 00:12:24.544 { 00:12:24.544 "dma_device_id": "system", 00:12:24.544 "dma_device_type": 1 00:12:24.544 }, 00:12:24.544 { 00:12:24.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.544 "dma_device_type": 2 00:12:24.544 } 00:12:24.544 ], 00:12:24.544 "driver_specific": {} 00:12:24.544 } 00:12:24.544 ] 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.544 "name": "Existed_Raid", 00:12:24.544 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:24.544 "strip_size_kb": 0, 00:12:24.544 "state": "configuring", 00:12:24.544 "raid_level": "raid1", 00:12:24.544 "superblock": false, 00:12:24.544 "num_base_bdevs": 4, 00:12:24.544 "num_base_bdevs_discovered": 3, 00:12:24.544 "num_base_bdevs_operational": 4, 00:12:24.544 "base_bdevs_list": [ 00:12:24.544 { 00:12:24.544 "name": "BaseBdev1", 00:12:24.544 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:24.544 "is_configured": true, 00:12:24.544 "data_offset": 0, 00:12:24.544 "data_size": 65536 00:12:24.544 }, 00:12:24.544 { 00:12:24.544 "name": null, 00:12:24.544 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:24.544 "is_configured": false, 00:12:24.544 "data_offset": 0, 00:12:24.544 "data_size": 65536 00:12:24.544 }, 00:12:24.544 { 00:12:24.544 "name": "BaseBdev3", 00:12:24.544 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:24.544 "is_configured": true, 00:12:24.544 "data_offset": 0, 00:12:24.544 "data_size": 65536 00:12:24.544 }, 00:12:24.544 { 00:12:24.544 "name": "BaseBdev4", 00:12:24.544 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:24.544 "is_configured": true, 00:12:24.544 "data_offset": 0, 00:12:24.544 "data_size": 65536 00:12:24.544 } 00:12:24.544 ] 00:12:24.544 }' 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.544 11:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 [2024-11-04 11:44:50.377266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.114 "name": "Existed_Raid", 00:12:25.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.114 "strip_size_kb": 0, 00:12:25.114 "state": "configuring", 00:12:25.114 "raid_level": "raid1", 00:12:25.114 "superblock": false, 00:12:25.114 "num_base_bdevs": 4, 00:12:25.114 "num_base_bdevs_discovered": 2, 00:12:25.114 "num_base_bdevs_operational": 4, 00:12:25.114 "base_bdevs_list": [ 00:12:25.114 { 00:12:25.114 "name": "BaseBdev1", 00:12:25.114 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:25.114 "is_configured": true, 00:12:25.114 "data_offset": 0, 00:12:25.114 "data_size": 65536 00:12:25.114 }, 00:12:25.114 { 00:12:25.114 "name": null, 00:12:25.114 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:25.114 "is_configured": false, 00:12:25.114 "data_offset": 0, 00:12:25.114 "data_size": 65536 00:12:25.114 }, 00:12:25.114 { 00:12:25.114 "name": null, 00:12:25.114 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:25.114 "is_configured": false, 00:12:25.114 "data_offset": 0, 00:12:25.114 "data_size": 65536 00:12:25.114 }, 00:12:25.114 { 00:12:25.114 "name": "BaseBdev4", 00:12:25.114 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:25.114 "is_configured": true, 00:12:25.114 "data_offset": 0, 00:12:25.114 "data_size": 65536 00:12:25.114 } 00:12:25.114 ] 00:12:25.114 }' 00:12:25.114 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.114 11:44:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.373 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.373 [2024-11-04 11:44:50.892438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.633 11:44:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.633 "name": "Existed_Raid", 00:12:25.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.633 "strip_size_kb": 0, 00:12:25.633 "state": "configuring", 00:12:25.633 "raid_level": "raid1", 00:12:25.633 "superblock": false, 00:12:25.633 "num_base_bdevs": 4, 00:12:25.633 "num_base_bdevs_discovered": 3, 00:12:25.633 "num_base_bdevs_operational": 4, 00:12:25.633 "base_bdevs_list": [ 00:12:25.633 { 00:12:25.633 "name": "BaseBdev1", 00:12:25.633 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:25.633 "is_configured": true, 00:12:25.633 "data_offset": 0, 00:12:25.633 "data_size": 65536 00:12:25.633 }, 00:12:25.633 { 00:12:25.633 "name": null, 00:12:25.633 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:25.633 "is_configured": false, 00:12:25.633 "data_offset": 
0, 00:12:25.633 "data_size": 65536 00:12:25.633 }, 00:12:25.633 { 00:12:25.633 "name": "BaseBdev3", 00:12:25.633 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:25.633 "is_configured": true, 00:12:25.633 "data_offset": 0, 00:12:25.633 "data_size": 65536 00:12:25.633 }, 00:12:25.633 { 00:12:25.633 "name": "BaseBdev4", 00:12:25.633 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:25.633 "is_configured": true, 00:12:25.633 "data_offset": 0, 00:12:25.633 "data_size": 65536 00:12:25.633 } 00:12:25.633 ] 00:12:25.633 }' 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.633 11:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.892 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.892 [2024-11-04 11:44:51.355929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.152 11:44:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.152 "name": "Existed_Raid", 00:12:26.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.152 "strip_size_kb": 0, 00:12:26.152 "state": "configuring", 00:12:26.152 
"raid_level": "raid1", 00:12:26.152 "superblock": false, 00:12:26.152 "num_base_bdevs": 4, 00:12:26.152 "num_base_bdevs_discovered": 2, 00:12:26.152 "num_base_bdevs_operational": 4, 00:12:26.152 "base_bdevs_list": [ 00:12:26.152 { 00:12:26.152 "name": null, 00:12:26.152 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:26.152 "is_configured": false, 00:12:26.152 "data_offset": 0, 00:12:26.152 "data_size": 65536 00:12:26.152 }, 00:12:26.152 { 00:12:26.152 "name": null, 00:12:26.152 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:26.152 "is_configured": false, 00:12:26.152 "data_offset": 0, 00:12:26.152 "data_size": 65536 00:12:26.152 }, 00:12:26.152 { 00:12:26.152 "name": "BaseBdev3", 00:12:26.152 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:26.152 "is_configured": true, 00:12:26.152 "data_offset": 0, 00:12:26.152 "data_size": 65536 00:12:26.152 }, 00:12:26.152 { 00:12:26.152 "name": "BaseBdev4", 00:12:26.152 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:26.152 "is_configured": true, 00:12:26.152 "data_offset": 0, 00:12:26.152 "data_size": 65536 00:12:26.152 } 00:12:26.152 ] 00:12:26.152 }' 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.152 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.412 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.412 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:26.412 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.412 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.412 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.672 [2024-11-04 11:44:51.958661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.672 11:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.672 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.672 "name": "Existed_Raid", 00:12:26.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.672 "strip_size_kb": 0, 00:12:26.672 "state": "configuring", 00:12:26.672 "raid_level": "raid1", 00:12:26.672 "superblock": false, 00:12:26.672 "num_base_bdevs": 4, 00:12:26.672 "num_base_bdevs_discovered": 3, 00:12:26.672 "num_base_bdevs_operational": 4, 00:12:26.672 "base_bdevs_list": [ 00:12:26.672 { 00:12:26.672 "name": null, 00:12:26.672 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:26.672 "is_configured": false, 00:12:26.672 "data_offset": 0, 00:12:26.672 "data_size": 65536 00:12:26.672 }, 00:12:26.672 { 00:12:26.672 "name": "BaseBdev2", 00:12:26.672 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:26.672 "is_configured": true, 00:12:26.672 "data_offset": 0, 00:12:26.672 "data_size": 65536 00:12:26.672 }, 00:12:26.672 { 00:12:26.672 "name": "BaseBdev3", 00:12:26.672 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:26.672 "is_configured": true, 00:12:26.672 "data_offset": 0, 00:12:26.672 "data_size": 65536 00:12:26.672 }, 00:12:26.672 { 00:12:26.672 "name": "BaseBdev4", 00:12:26.672 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:26.672 "is_configured": true, 00:12:26.672 "data_offset": 0, 00:12:26.672 "data_size": 65536 00:12:26.672 } 00:12:26.672 ] 00:12:26.672 }' 00:12:26.672 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.672 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.932 11:44:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.932 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:26.932 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.932 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.932 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be702678-8afa-4bd3-bf56-53b2da713816 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.191 [2024-11-04 11:44:52.564941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:27.191 [2024-11-04 11:44:52.565090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:27.191 [2024-11-04 11:44:52.565125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:27.191 
[2024-11-04 11:44:52.565527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:27.191 [2024-11-04 11:44:52.565776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:27.191 [2024-11-04 11:44:52.565823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:27.191 [2024-11-04 11:44:52.566195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.191 NewBaseBdev 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.191 [ 00:12:27.191 { 00:12:27.191 "name": "NewBaseBdev", 00:12:27.191 "aliases": [ 00:12:27.191 "be702678-8afa-4bd3-bf56-53b2da713816" 00:12:27.191 ], 00:12:27.191 "product_name": "Malloc disk", 00:12:27.191 "block_size": 512, 00:12:27.191 "num_blocks": 65536, 00:12:27.191 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:27.191 "assigned_rate_limits": { 00:12:27.191 "rw_ios_per_sec": 0, 00:12:27.191 "rw_mbytes_per_sec": 0, 00:12:27.191 "r_mbytes_per_sec": 0, 00:12:27.191 "w_mbytes_per_sec": 0 00:12:27.191 }, 00:12:27.191 "claimed": true, 00:12:27.191 "claim_type": "exclusive_write", 00:12:27.191 "zoned": false, 00:12:27.191 "supported_io_types": { 00:12:27.191 "read": true, 00:12:27.191 "write": true, 00:12:27.191 "unmap": true, 00:12:27.191 "flush": true, 00:12:27.191 "reset": true, 00:12:27.191 "nvme_admin": false, 00:12:27.191 "nvme_io": false, 00:12:27.191 "nvme_io_md": false, 00:12:27.191 "write_zeroes": true, 00:12:27.191 "zcopy": true, 00:12:27.191 "get_zone_info": false, 00:12:27.191 "zone_management": false, 00:12:27.191 "zone_append": false, 00:12:27.191 "compare": false, 00:12:27.191 "compare_and_write": false, 00:12:27.191 "abort": true, 00:12:27.191 "seek_hole": false, 00:12:27.191 "seek_data": false, 00:12:27.191 "copy": true, 00:12:27.191 "nvme_iov_md": false 00:12:27.191 }, 00:12:27.191 "memory_domains": [ 00:12:27.191 { 00:12:27.191 "dma_device_id": "system", 00:12:27.191 "dma_device_type": 1 00:12:27.191 }, 00:12:27.191 { 00:12:27.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.191 "dma_device_type": 2 00:12:27.191 } 00:12:27.191 ], 00:12:27.191 "driver_specific": {} 00:12:27.191 } 00:12:27.191 ] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.191 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.191 "name": "Existed_Raid", 00:12:27.191 "uuid": "10c0ff2b-0d46-4b5f-8a04-01a17033429e", 00:12:27.191 "strip_size_kb": 0, 00:12:27.191 "state": "online", 00:12:27.191 
"raid_level": "raid1", 00:12:27.191 "superblock": false, 00:12:27.191 "num_base_bdevs": 4, 00:12:27.191 "num_base_bdevs_discovered": 4, 00:12:27.191 "num_base_bdevs_operational": 4, 00:12:27.191 "base_bdevs_list": [ 00:12:27.191 { 00:12:27.191 "name": "NewBaseBdev", 00:12:27.191 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:27.191 "is_configured": true, 00:12:27.191 "data_offset": 0, 00:12:27.191 "data_size": 65536 00:12:27.191 }, 00:12:27.191 { 00:12:27.191 "name": "BaseBdev2", 00:12:27.191 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:27.191 "is_configured": true, 00:12:27.191 "data_offset": 0, 00:12:27.191 "data_size": 65536 00:12:27.191 }, 00:12:27.191 { 00:12:27.191 "name": "BaseBdev3", 00:12:27.191 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:27.191 "is_configured": true, 00:12:27.191 "data_offset": 0, 00:12:27.192 "data_size": 65536 00:12:27.192 }, 00:12:27.192 { 00:12:27.192 "name": "BaseBdev4", 00:12:27.192 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:27.192 "is_configured": true, 00:12:27.192 "data_offset": 0, 00:12:27.192 "data_size": 65536 00:12:27.192 } 00:12:27.192 ] 00:12:27.192 }' 00:12:27.192 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.192 11:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.761 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:27.761 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:27.761 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.761 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.761 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.761 11:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:27.761 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.761 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:27.761 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.761 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.761 [2024-11-04 11:44:53.008653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.761 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.761 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.761 "name": "Existed_Raid", 00:12:27.761 "aliases": [ 00:12:27.761 "10c0ff2b-0d46-4b5f-8a04-01a17033429e" 00:12:27.761 ], 00:12:27.761 "product_name": "Raid Volume", 00:12:27.761 "block_size": 512, 00:12:27.761 "num_blocks": 65536, 00:12:27.761 "uuid": "10c0ff2b-0d46-4b5f-8a04-01a17033429e", 00:12:27.761 "assigned_rate_limits": { 00:12:27.761 "rw_ios_per_sec": 0, 00:12:27.761 "rw_mbytes_per_sec": 0, 00:12:27.761 "r_mbytes_per_sec": 0, 00:12:27.761 "w_mbytes_per_sec": 0 00:12:27.761 }, 00:12:27.761 "claimed": false, 00:12:27.761 "zoned": false, 00:12:27.761 "supported_io_types": { 00:12:27.761 "read": true, 00:12:27.761 "write": true, 00:12:27.761 "unmap": false, 00:12:27.761 "flush": false, 00:12:27.761 "reset": true, 00:12:27.761 "nvme_admin": false, 00:12:27.761 "nvme_io": false, 00:12:27.761 "nvme_io_md": false, 00:12:27.761 "write_zeroes": true, 00:12:27.761 "zcopy": false, 00:12:27.761 "get_zone_info": false, 00:12:27.761 "zone_management": false, 00:12:27.761 "zone_append": false, 00:12:27.761 "compare": false, 00:12:27.761 "compare_and_write": false, 00:12:27.761 "abort": false, 00:12:27.761 "seek_hole": false, 00:12:27.761 "seek_data": false, 00:12:27.761 
"copy": false, 00:12:27.761 "nvme_iov_md": false 00:12:27.761 }, 00:12:27.761 "memory_domains": [ 00:12:27.761 { 00:12:27.761 "dma_device_id": "system", 00:12:27.761 "dma_device_type": 1 00:12:27.761 }, 00:12:27.761 { 00:12:27.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.761 "dma_device_type": 2 00:12:27.761 }, 00:12:27.761 { 00:12:27.761 "dma_device_id": "system", 00:12:27.761 "dma_device_type": 1 00:12:27.761 }, 00:12:27.761 { 00:12:27.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.761 "dma_device_type": 2 00:12:27.761 }, 00:12:27.761 { 00:12:27.761 "dma_device_id": "system", 00:12:27.761 "dma_device_type": 1 00:12:27.761 }, 00:12:27.761 { 00:12:27.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.761 "dma_device_type": 2 00:12:27.761 }, 00:12:27.761 { 00:12:27.761 "dma_device_id": "system", 00:12:27.761 "dma_device_type": 1 00:12:27.761 }, 00:12:27.761 { 00:12:27.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.761 "dma_device_type": 2 00:12:27.761 } 00:12:27.761 ], 00:12:27.761 "driver_specific": { 00:12:27.761 "raid": { 00:12:27.761 "uuid": "10c0ff2b-0d46-4b5f-8a04-01a17033429e", 00:12:27.762 "strip_size_kb": 0, 00:12:27.762 "state": "online", 00:12:27.762 "raid_level": "raid1", 00:12:27.762 "superblock": false, 00:12:27.762 "num_base_bdevs": 4, 00:12:27.762 "num_base_bdevs_discovered": 4, 00:12:27.762 "num_base_bdevs_operational": 4, 00:12:27.762 "base_bdevs_list": [ 00:12:27.762 { 00:12:27.762 "name": "NewBaseBdev", 00:12:27.762 "uuid": "be702678-8afa-4bd3-bf56-53b2da713816", 00:12:27.762 "is_configured": true, 00:12:27.762 "data_offset": 0, 00:12:27.762 "data_size": 65536 00:12:27.762 }, 00:12:27.762 { 00:12:27.762 "name": "BaseBdev2", 00:12:27.762 "uuid": "f22cd7b5-68d5-457b-87e6-de78f6a5b4e1", 00:12:27.762 "is_configured": true, 00:12:27.762 "data_offset": 0, 00:12:27.762 "data_size": 65536 00:12:27.762 }, 00:12:27.762 { 00:12:27.762 "name": "BaseBdev3", 00:12:27.762 "uuid": "718c481b-7ee0-470a-be57-4fdf71067762", 00:12:27.762 
"is_configured": true, 00:12:27.762 "data_offset": 0, 00:12:27.762 "data_size": 65536 00:12:27.762 }, 00:12:27.762 { 00:12:27.762 "name": "BaseBdev4", 00:12:27.762 "uuid": "cf7d8127-3843-49cf-aaa5-a591d7d6c434", 00:12:27.762 "is_configured": true, 00:12:27.762 "data_offset": 0, 00:12:27.762 "data_size": 65536 00:12:27.762 } 00:12:27.762 ] 00:12:27.762 } 00:12:27.762 } 00:12:27.762 }' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:27.762 BaseBdev2 00:12:27.762 BaseBdev3 00:12:27.762 BaseBdev4' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.762 11:44:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.762 11:44:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.762 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.021 [2024-11-04 11:44:53.327743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.021 [2024-11-04 11:44:53.327813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.021 [2024-11-04 11:44:53.327961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.021 [2024-11-04 11:44:53.328323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.021 [2024-11-04 11:44:53.328390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73416 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73416 ']' 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73416 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73416 00:12:28.021 killing process with pid 73416 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73416' 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73416 00:12:28.021 11:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73416 00:12:28.021 [2024-11-04 11:44:53.369133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.280 [2024-11-04 11:44:53.790139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.659 11:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:29.659 00:12:29.659 real 0m11.791s 00:12:29.659 user 0m18.701s 00:12:29.659 sys 0m2.031s 00:12:29.659 11:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:29.659 11:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.659 ************************************ 00:12:29.659 END TEST raid_state_function_test 00:12:29.659 ************************************ 
00:12:29.659 11:44:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:29.659 11:44:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:29.659 11:44:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:29.659 11:44:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.659 ************************************ 00:12:29.659 START TEST raid_state_function_test_sb 00:12:29.659 ************************************ 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.659 
11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:29.659 Process raid pid: 74093 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74093 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74093' 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74093 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74093 ']' 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:29.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:29.659 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.659 [2024-11-04 11:44:55.127930] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:29.659 [2024-11-04 11:44:55.128175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.918 [2024-11-04 11:44:55.306420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.918 [2024-11-04 11:44:55.425396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.177 [2024-11-04 11:44:55.638685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.177 [2024-11-04 11:44:55.638816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.745 [2024-11-04 11:44:55.995739] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.745 [2024-11-04 11:44:55.995792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.745 [2024-11-04 11:44:55.995802] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.745 [2024-11-04 11:44:55.995812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.745 [2024-11-04 11:44:55.995820] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:30.745 [2024-11-04 11:44:55.995828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.745 [2024-11-04 11:44:55.995835] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.745 [2024-11-04 11:44:55.995843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.745 11:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.745 11:44:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.745 "name": "Existed_Raid", 00:12:30.745 "uuid": "3aa17f02-da51-4331-b7b9-fa51d5d189e3", 00:12:30.745 "strip_size_kb": 0, 00:12:30.745 "state": "configuring", 00:12:30.745 "raid_level": "raid1", 00:12:30.745 "superblock": true, 00:12:30.745 "num_base_bdevs": 4, 00:12:30.745 "num_base_bdevs_discovered": 0, 00:12:30.745 "num_base_bdevs_operational": 4, 00:12:30.745 "base_bdevs_list": [ 00:12:30.745 { 00:12:30.745 "name": "BaseBdev1", 00:12:30.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.745 "is_configured": false, 00:12:30.745 "data_offset": 0, 00:12:30.745 "data_size": 0 00:12:30.745 }, 00:12:30.745 { 00:12:30.745 "name": "BaseBdev2", 00:12:30.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.745 "is_configured": false, 00:12:30.745 "data_offset": 0, 00:12:30.745 "data_size": 0 00:12:30.745 }, 00:12:30.745 { 00:12:30.745 "name": "BaseBdev3", 00:12:30.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.745 "is_configured": false, 00:12:30.745 "data_offset": 0, 00:12:30.745 "data_size": 0 00:12:30.745 }, 00:12:30.745 { 00:12:30.745 "name": "BaseBdev4", 00:12:30.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.745 "is_configured": false, 00:12:30.745 "data_offset": 0, 00:12:30.745 "data_size": 0 00:12:30.745 } 00:12:30.745 ] 00:12:30.745 }' 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.745 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.003 [2024-11-04 11:44:56.438957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.003 [2024-11-04 11:44:56.439078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.003 [2024-11-04 11:44:56.450934] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.003 [2024-11-04 11:44:56.450986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.003 [2024-11-04 11:44:56.450996] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.003 [2024-11-04 11:44:56.451006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.003 [2024-11-04 11:44:56.451013] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.003 [2024-11-04 11:44:56.451022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.003 [2024-11-04 11:44:56.451028] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:31.003 [2024-11-04 11:44:56.451036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.003 [2024-11-04 11:44:56.501778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.003 BaseBdev1 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.003 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.263 [ 00:12:31.263 { 00:12:31.263 "name": "BaseBdev1", 00:12:31.263 "aliases": [ 00:12:31.263 "a1ac5789-f785-4ba8-935e-2fd700768f3b" 00:12:31.263 ], 00:12:31.263 "product_name": "Malloc disk", 00:12:31.263 "block_size": 512, 00:12:31.263 "num_blocks": 65536, 00:12:31.263 "uuid": "a1ac5789-f785-4ba8-935e-2fd700768f3b", 00:12:31.263 "assigned_rate_limits": { 00:12:31.263 "rw_ios_per_sec": 0, 00:12:31.263 "rw_mbytes_per_sec": 0, 00:12:31.263 "r_mbytes_per_sec": 0, 00:12:31.263 "w_mbytes_per_sec": 0 00:12:31.263 }, 00:12:31.263 "claimed": true, 00:12:31.263 "claim_type": "exclusive_write", 00:12:31.263 "zoned": false, 00:12:31.263 "supported_io_types": { 00:12:31.263 "read": true, 00:12:31.263 "write": true, 00:12:31.263 "unmap": true, 00:12:31.263 "flush": true, 00:12:31.263 "reset": true, 00:12:31.263 "nvme_admin": false, 00:12:31.263 "nvme_io": false, 00:12:31.263 "nvme_io_md": false, 00:12:31.263 "write_zeroes": true, 00:12:31.263 "zcopy": true, 00:12:31.263 "get_zone_info": false, 00:12:31.263 "zone_management": false, 00:12:31.263 "zone_append": false, 00:12:31.263 "compare": false, 00:12:31.263 "compare_and_write": false, 00:12:31.263 "abort": true, 00:12:31.263 "seek_hole": false, 00:12:31.263 "seek_data": false, 00:12:31.263 "copy": true, 00:12:31.263 "nvme_iov_md": false 00:12:31.263 }, 00:12:31.263 "memory_domains": [ 00:12:31.263 { 00:12:31.263 "dma_device_id": "system", 00:12:31.263 "dma_device_type": 1 00:12:31.263 }, 00:12:31.263 { 00:12:31.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.263 "dma_device_type": 2 00:12:31.263 } 00:12:31.263 ], 00:12:31.263 "driver_specific": {} 
00:12:31.263 } 00:12:31.263 ] 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.263 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.264 "name": "Existed_Raid", 00:12:31.264 "uuid": "93cd9e51-1498-4198-b761-babf4e485ffb", 00:12:31.264 "strip_size_kb": 0, 00:12:31.264 "state": "configuring", 00:12:31.264 "raid_level": "raid1", 00:12:31.264 "superblock": true, 00:12:31.264 "num_base_bdevs": 4, 00:12:31.264 "num_base_bdevs_discovered": 1, 00:12:31.264 "num_base_bdevs_operational": 4, 00:12:31.264 "base_bdevs_list": [ 00:12:31.264 { 00:12:31.264 "name": "BaseBdev1", 00:12:31.264 "uuid": "a1ac5789-f785-4ba8-935e-2fd700768f3b", 00:12:31.264 "is_configured": true, 00:12:31.264 "data_offset": 2048, 00:12:31.264 "data_size": 63488 00:12:31.264 }, 00:12:31.264 { 00:12:31.264 "name": "BaseBdev2", 00:12:31.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.264 "is_configured": false, 00:12:31.264 "data_offset": 0, 00:12:31.264 "data_size": 0 00:12:31.264 }, 00:12:31.264 { 00:12:31.264 "name": "BaseBdev3", 00:12:31.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.264 "is_configured": false, 00:12:31.264 "data_offset": 0, 00:12:31.264 "data_size": 0 00:12:31.264 }, 00:12:31.264 { 00:12:31.264 "name": "BaseBdev4", 00:12:31.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.264 "is_configured": false, 00:12:31.264 "data_offset": 0, 00:12:31.264 "data_size": 0 00:12:31.264 } 00:12:31.264 ] 00:12:31.264 }' 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.264 11:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.528 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.528 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.529 [2024-11-04 11:44:57.028971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.529 [2024-11-04 11:44:57.029105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.529 [2024-11-04 11:44:57.041004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.529 [2024-11-04 11:44:57.043041] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.529 [2024-11-04 11:44:57.043129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.529 [2024-11-04 11:44:57.043145] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.529 [2024-11-04 11:44:57.043159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.529 [2024-11-04 11:44:57.043167] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.529 [2024-11-04 11:44:57.043176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:31.529 11:44:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.529 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.789 "name": 
"Existed_Raid", 00:12:31.789 "uuid": "4bbbfead-d37d-4b19-9f91-778535da3b8b", 00:12:31.789 "strip_size_kb": 0, 00:12:31.789 "state": "configuring", 00:12:31.789 "raid_level": "raid1", 00:12:31.789 "superblock": true, 00:12:31.789 "num_base_bdevs": 4, 00:12:31.789 "num_base_bdevs_discovered": 1, 00:12:31.789 "num_base_bdevs_operational": 4, 00:12:31.789 "base_bdevs_list": [ 00:12:31.789 { 00:12:31.789 "name": "BaseBdev1", 00:12:31.789 "uuid": "a1ac5789-f785-4ba8-935e-2fd700768f3b", 00:12:31.789 "is_configured": true, 00:12:31.789 "data_offset": 2048, 00:12:31.789 "data_size": 63488 00:12:31.789 }, 00:12:31.789 { 00:12:31.789 "name": "BaseBdev2", 00:12:31.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.789 "is_configured": false, 00:12:31.789 "data_offset": 0, 00:12:31.789 "data_size": 0 00:12:31.789 }, 00:12:31.789 { 00:12:31.789 "name": "BaseBdev3", 00:12:31.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.789 "is_configured": false, 00:12:31.789 "data_offset": 0, 00:12:31.789 "data_size": 0 00:12:31.789 }, 00:12:31.789 { 00:12:31.789 "name": "BaseBdev4", 00:12:31.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.789 "is_configured": false, 00:12:31.789 "data_offset": 0, 00:12:31.789 "data_size": 0 00:12:31.789 } 00:12:31.789 ] 00:12:31.789 }' 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.789 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.049 [2024-11-04 11:44:57.528618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.049 
BaseBdev2 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.049 [ 00:12:32.049 { 00:12:32.049 "name": "BaseBdev2", 00:12:32.049 "aliases": [ 00:12:32.049 "23695d3e-befb-472e-b151-9b5a7ccd1e2e" 00:12:32.049 ], 00:12:32.049 "product_name": "Malloc disk", 00:12:32.049 "block_size": 512, 00:12:32.049 "num_blocks": 65536, 00:12:32.049 "uuid": "23695d3e-befb-472e-b151-9b5a7ccd1e2e", 00:12:32.049 "assigned_rate_limits": { 
00:12:32.049 "rw_ios_per_sec": 0, 00:12:32.049 "rw_mbytes_per_sec": 0, 00:12:32.049 "r_mbytes_per_sec": 0, 00:12:32.049 "w_mbytes_per_sec": 0 00:12:32.049 }, 00:12:32.049 "claimed": true, 00:12:32.049 "claim_type": "exclusive_write", 00:12:32.049 "zoned": false, 00:12:32.049 "supported_io_types": { 00:12:32.049 "read": true, 00:12:32.049 "write": true, 00:12:32.049 "unmap": true, 00:12:32.049 "flush": true, 00:12:32.049 "reset": true, 00:12:32.049 "nvme_admin": false, 00:12:32.049 "nvme_io": false, 00:12:32.049 "nvme_io_md": false, 00:12:32.049 "write_zeroes": true, 00:12:32.049 "zcopy": true, 00:12:32.049 "get_zone_info": false, 00:12:32.049 "zone_management": false, 00:12:32.049 "zone_append": false, 00:12:32.049 "compare": false, 00:12:32.049 "compare_and_write": false, 00:12:32.049 "abort": true, 00:12:32.049 "seek_hole": false, 00:12:32.049 "seek_data": false, 00:12:32.049 "copy": true, 00:12:32.049 "nvme_iov_md": false 00:12:32.049 }, 00:12:32.049 "memory_domains": [ 00:12:32.049 { 00:12:32.049 "dma_device_id": "system", 00:12:32.049 "dma_device_type": 1 00:12:32.049 }, 00:12:32.049 { 00:12:32.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.049 "dma_device_type": 2 00:12:32.049 } 00:12:32.049 ], 00:12:32.049 "driver_specific": {} 00:12:32.049 } 00:12:32.049 ] 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.049 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.309 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.309 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.309 "name": "Existed_Raid", 00:12:32.309 "uuid": "4bbbfead-d37d-4b19-9f91-778535da3b8b", 00:12:32.309 "strip_size_kb": 0, 00:12:32.309 "state": "configuring", 00:12:32.309 "raid_level": "raid1", 00:12:32.309 "superblock": true, 00:12:32.309 "num_base_bdevs": 4, 00:12:32.309 "num_base_bdevs_discovered": 2, 00:12:32.309 "num_base_bdevs_operational": 4, 00:12:32.309 
"base_bdevs_list": [ 00:12:32.309 { 00:12:32.309 "name": "BaseBdev1", 00:12:32.309 "uuid": "a1ac5789-f785-4ba8-935e-2fd700768f3b", 00:12:32.309 "is_configured": true, 00:12:32.309 "data_offset": 2048, 00:12:32.309 "data_size": 63488 00:12:32.309 }, 00:12:32.309 { 00:12:32.309 "name": "BaseBdev2", 00:12:32.309 "uuid": "23695d3e-befb-472e-b151-9b5a7ccd1e2e", 00:12:32.309 "is_configured": true, 00:12:32.309 "data_offset": 2048, 00:12:32.309 "data_size": 63488 00:12:32.309 }, 00:12:32.309 { 00:12:32.309 "name": "BaseBdev3", 00:12:32.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.309 "is_configured": false, 00:12:32.309 "data_offset": 0, 00:12:32.309 "data_size": 0 00:12:32.309 }, 00:12:32.309 { 00:12:32.309 "name": "BaseBdev4", 00:12:32.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.309 "is_configured": false, 00:12:32.309 "data_offset": 0, 00:12:32.309 "data_size": 0 00:12:32.309 } 00:12:32.309 ] 00:12:32.309 }' 00:12:32.309 11:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.309 11:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.568 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:32.568 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.568 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.828 [2024-11-04 11:44:58.111140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.828 BaseBdev3 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.828 [ 00:12:32.828 { 00:12:32.828 "name": "BaseBdev3", 00:12:32.828 "aliases": [ 00:12:32.828 "49642e11-5943-43df-9617-38498d102cfb" 00:12:32.828 ], 00:12:32.828 "product_name": "Malloc disk", 00:12:32.828 "block_size": 512, 00:12:32.828 "num_blocks": 65536, 00:12:32.828 "uuid": "49642e11-5943-43df-9617-38498d102cfb", 00:12:32.828 "assigned_rate_limits": { 00:12:32.828 "rw_ios_per_sec": 0, 00:12:32.828 "rw_mbytes_per_sec": 0, 00:12:32.828 "r_mbytes_per_sec": 0, 00:12:32.828 "w_mbytes_per_sec": 0 00:12:32.828 }, 00:12:32.828 "claimed": true, 00:12:32.828 "claim_type": "exclusive_write", 00:12:32.828 "zoned": false, 00:12:32.828 "supported_io_types": { 00:12:32.828 "read": true, 00:12:32.828 
"write": true, 00:12:32.828 "unmap": true, 00:12:32.828 "flush": true, 00:12:32.828 "reset": true, 00:12:32.828 "nvme_admin": false, 00:12:32.828 "nvme_io": false, 00:12:32.828 "nvme_io_md": false, 00:12:32.828 "write_zeroes": true, 00:12:32.828 "zcopy": true, 00:12:32.828 "get_zone_info": false, 00:12:32.828 "zone_management": false, 00:12:32.828 "zone_append": false, 00:12:32.828 "compare": false, 00:12:32.828 "compare_and_write": false, 00:12:32.828 "abort": true, 00:12:32.828 "seek_hole": false, 00:12:32.828 "seek_data": false, 00:12:32.828 "copy": true, 00:12:32.828 "nvme_iov_md": false 00:12:32.828 }, 00:12:32.828 "memory_domains": [ 00:12:32.828 { 00:12:32.828 "dma_device_id": "system", 00:12:32.828 "dma_device_type": 1 00:12:32.828 }, 00:12:32.828 { 00:12:32.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.828 "dma_device_type": 2 00:12:32.828 } 00:12:32.828 ], 00:12:32.828 "driver_specific": {} 00:12:32.828 } 00:12:32.828 ] 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.828 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.828 "name": "Existed_Raid", 00:12:32.828 "uuid": "4bbbfead-d37d-4b19-9f91-778535da3b8b", 00:12:32.828 "strip_size_kb": 0, 00:12:32.828 "state": "configuring", 00:12:32.828 "raid_level": "raid1", 00:12:32.828 "superblock": true, 00:12:32.828 "num_base_bdevs": 4, 00:12:32.828 "num_base_bdevs_discovered": 3, 00:12:32.828 "num_base_bdevs_operational": 4, 00:12:32.828 "base_bdevs_list": [ 00:12:32.828 { 00:12:32.828 "name": "BaseBdev1", 00:12:32.828 "uuid": "a1ac5789-f785-4ba8-935e-2fd700768f3b", 00:12:32.828 "is_configured": true, 00:12:32.828 "data_offset": 2048, 00:12:32.828 "data_size": 63488 00:12:32.828 }, 00:12:32.828 { 00:12:32.828 "name": "BaseBdev2", 00:12:32.828 "uuid": 
"23695d3e-befb-472e-b151-9b5a7ccd1e2e", 00:12:32.828 "is_configured": true, 00:12:32.829 "data_offset": 2048, 00:12:32.829 "data_size": 63488 00:12:32.829 }, 00:12:32.829 { 00:12:32.829 "name": "BaseBdev3", 00:12:32.829 "uuid": "49642e11-5943-43df-9617-38498d102cfb", 00:12:32.829 "is_configured": true, 00:12:32.829 "data_offset": 2048, 00:12:32.829 "data_size": 63488 00:12:32.829 }, 00:12:32.829 { 00:12:32.829 "name": "BaseBdev4", 00:12:32.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.829 "is_configured": false, 00:12:32.829 "data_offset": 0, 00:12:32.829 "data_size": 0 00:12:32.829 } 00:12:32.829 ] 00:12:32.829 }' 00:12:32.829 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.829 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 [2024-11-04 11:44:58.681686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.397 [2024-11-04 11:44:58.682151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:33.397 [2024-11-04 11:44:58.682207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:33.397 [2024-11-04 11:44:58.682576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:33.397 BaseBdev4 00:12:33.397 [2024-11-04 11:44:58.682792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:33.397 [2024-11-04 11:44:58.682810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:33.397 [2024-11-04 11:44:58.682997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 [ 00:12:33.397 { 00:12:33.397 "name": "BaseBdev4", 00:12:33.397 "aliases": [ 00:12:33.397 "7ea195f0-4778-4542-a564-45dd07438868" 00:12:33.397 ], 00:12:33.397 "product_name": "Malloc disk", 00:12:33.397 "block_size": 512, 00:12:33.397 
"num_blocks": 65536, 00:12:33.397 "uuid": "7ea195f0-4778-4542-a564-45dd07438868", 00:12:33.397 "assigned_rate_limits": { 00:12:33.397 "rw_ios_per_sec": 0, 00:12:33.397 "rw_mbytes_per_sec": 0, 00:12:33.397 "r_mbytes_per_sec": 0, 00:12:33.397 "w_mbytes_per_sec": 0 00:12:33.397 }, 00:12:33.397 "claimed": true, 00:12:33.397 "claim_type": "exclusive_write", 00:12:33.397 "zoned": false, 00:12:33.397 "supported_io_types": { 00:12:33.397 "read": true, 00:12:33.397 "write": true, 00:12:33.397 "unmap": true, 00:12:33.397 "flush": true, 00:12:33.397 "reset": true, 00:12:33.397 "nvme_admin": false, 00:12:33.397 "nvme_io": false, 00:12:33.397 "nvme_io_md": false, 00:12:33.397 "write_zeroes": true, 00:12:33.397 "zcopy": true, 00:12:33.397 "get_zone_info": false, 00:12:33.397 "zone_management": false, 00:12:33.397 "zone_append": false, 00:12:33.397 "compare": false, 00:12:33.397 "compare_and_write": false, 00:12:33.397 "abort": true, 00:12:33.397 "seek_hole": false, 00:12:33.397 "seek_data": false, 00:12:33.397 "copy": true, 00:12:33.397 "nvme_iov_md": false 00:12:33.397 }, 00:12:33.397 "memory_domains": [ 00:12:33.397 { 00:12:33.397 "dma_device_id": "system", 00:12:33.397 "dma_device_type": 1 00:12:33.397 }, 00:12:33.397 { 00:12:33.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.397 "dma_device_type": 2 00:12:33.397 } 00:12:33.397 ], 00:12:33.397 "driver_specific": {} 00:12:33.397 } 00:12:33.397 ] 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.397 "name": "Existed_Raid", 00:12:33.397 "uuid": "4bbbfead-d37d-4b19-9f91-778535da3b8b", 00:12:33.397 "strip_size_kb": 0, 00:12:33.397 "state": "online", 00:12:33.397 "raid_level": "raid1", 00:12:33.397 "superblock": true, 00:12:33.397 "num_base_bdevs": 4, 
00:12:33.397 "num_base_bdevs_discovered": 4, 00:12:33.397 "num_base_bdevs_operational": 4, 00:12:33.397 "base_bdevs_list": [ 00:12:33.397 { 00:12:33.397 "name": "BaseBdev1", 00:12:33.397 "uuid": "a1ac5789-f785-4ba8-935e-2fd700768f3b", 00:12:33.397 "is_configured": true, 00:12:33.397 "data_offset": 2048, 00:12:33.397 "data_size": 63488 00:12:33.397 }, 00:12:33.397 { 00:12:33.397 "name": "BaseBdev2", 00:12:33.397 "uuid": "23695d3e-befb-472e-b151-9b5a7ccd1e2e", 00:12:33.397 "is_configured": true, 00:12:33.397 "data_offset": 2048, 00:12:33.397 "data_size": 63488 00:12:33.397 }, 00:12:33.397 { 00:12:33.397 "name": "BaseBdev3", 00:12:33.397 "uuid": "49642e11-5943-43df-9617-38498d102cfb", 00:12:33.397 "is_configured": true, 00:12:33.397 "data_offset": 2048, 00:12:33.397 "data_size": 63488 00:12:33.397 }, 00:12:33.397 { 00:12:33.397 "name": "BaseBdev4", 00:12:33.397 "uuid": "7ea195f0-4778-4542-a564-45dd07438868", 00:12:33.397 "is_configured": true, 00:12:33.397 "data_offset": 2048, 00:12:33.397 "data_size": 63488 00:12:33.397 } 00:12:33.397 ] 00:12:33.397 }' 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.397 11:44:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:33.656 
11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.656 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:33.656 [2024-11-04 11:44:59.165437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.915 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.915 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:33.915 "name": "Existed_Raid", 00:12:33.915 "aliases": [ 00:12:33.915 "4bbbfead-d37d-4b19-9f91-778535da3b8b" 00:12:33.915 ], 00:12:33.915 "product_name": "Raid Volume", 00:12:33.915 "block_size": 512, 00:12:33.915 "num_blocks": 63488, 00:12:33.915 "uuid": "4bbbfead-d37d-4b19-9f91-778535da3b8b", 00:12:33.915 "assigned_rate_limits": { 00:12:33.915 "rw_ios_per_sec": 0, 00:12:33.915 "rw_mbytes_per_sec": 0, 00:12:33.915 "r_mbytes_per_sec": 0, 00:12:33.915 "w_mbytes_per_sec": 0 00:12:33.915 }, 00:12:33.915 "claimed": false, 00:12:33.915 "zoned": false, 00:12:33.915 "supported_io_types": { 00:12:33.915 "read": true, 00:12:33.915 "write": true, 00:12:33.915 "unmap": false, 00:12:33.915 "flush": false, 00:12:33.915 "reset": true, 00:12:33.915 "nvme_admin": false, 00:12:33.915 "nvme_io": false, 00:12:33.915 "nvme_io_md": false, 00:12:33.915 "write_zeroes": true, 00:12:33.915 "zcopy": false, 00:12:33.915 "get_zone_info": false, 00:12:33.915 "zone_management": false, 00:12:33.915 "zone_append": false, 00:12:33.915 "compare": false, 00:12:33.915 "compare_and_write": false, 00:12:33.915 "abort": false, 00:12:33.915 "seek_hole": false, 00:12:33.915 "seek_data": false, 00:12:33.915 "copy": false, 00:12:33.915 
"nvme_iov_md": false 00:12:33.915 }, 00:12:33.915 "memory_domains": [ 00:12:33.915 { 00:12:33.915 "dma_device_id": "system", 00:12:33.915 "dma_device_type": 1 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.915 "dma_device_type": 2 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "dma_device_id": "system", 00:12:33.915 "dma_device_type": 1 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.915 "dma_device_type": 2 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "dma_device_id": "system", 00:12:33.915 "dma_device_type": 1 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.915 "dma_device_type": 2 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "dma_device_id": "system", 00:12:33.915 "dma_device_type": 1 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.915 "dma_device_type": 2 00:12:33.915 } 00:12:33.915 ], 00:12:33.915 "driver_specific": { 00:12:33.915 "raid": { 00:12:33.915 "uuid": "4bbbfead-d37d-4b19-9f91-778535da3b8b", 00:12:33.915 "strip_size_kb": 0, 00:12:33.915 "state": "online", 00:12:33.915 "raid_level": "raid1", 00:12:33.915 "superblock": true, 00:12:33.915 "num_base_bdevs": 4, 00:12:33.915 "num_base_bdevs_discovered": 4, 00:12:33.915 "num_base_bdevs_operational": 4, 00:12:33.915 "base_bdevs_list": [ 00:12:33.915 { 00:12:33.915 "name": "BaseBdev1", 00:12:33.915 "uuid": "a1ac5789-f785-4ba8-935e-2fd700768f3b", 00:12:33.915 "is_configured": true, 00:12:33.915 "data_offset": 2048, 00:12:33.915 "data_size": 63488 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "name": "BaseBdev2", 00:12:33.915 "uuid": "23695d3e-befb-472e-b151-9b5a7ccd1e2e", 00:12:33.915 "is_configured": true, 00:12:33.915 "data_offset": 2048, 00:12:33.915 "data_size": 63488 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "name": "BaseBdev3", 00:12:33.915 "uuid": "49642e11-5943-43df-9617-38498d102cfb", 00:12:33.915 "is_configured": true, 
00:12:33.915 "data_offset": 2048, 00:12:33.915 "data_size": 63488 00:12:33.915 }, 00:12:33.915 { 00:12:33.915 "name": "BaseBdev4", 00:12:33.915 "uuid": "7ea195f0-4778-4542-a564-45dd07438868", 00:12:33.915 "is_configured": true, 00:12:33.916 "data_offset": 2048, 00:12:33.916 "data_size": 63488 00:12:33.916 } 00:12:33.916 ] 00:12:33.916 } 00:12:33.916 } 00:12:33.916 }' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:33.916 BaseBdev2 00:12:33.916 BaseBdev3 00:12:33.916 BaseBdev4' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.916 11:44:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.916 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.174 [2024-11-04 11:44:59.440577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:34.174 11:44:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.174 "name": "Existed_Raid", 00:12:34.174 "uuid": "4bbbfead-d37d-4b19-9f91-778535da3b8b", 00:12:34.174 "strip_size_kb": 0, 00:12:34.174 
"state": "online", 00:12:34.174 "raid_level": "raid1", 00:12:34.174 "superblock": true, 00:12:34.174 "num_base_bdevs": 4, 00:12:34.174 "num_base_bdevs_discovered": 3, 00:12:34.174 "num_base_bdevs_operational": 3, 00:12:34.174 "base_bdevs_list": [ 00:12:34.174 { 00:12:34.174 "name": null, 00:12:34.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.174 "is_configured": false, 00:12:34.174 "data_offset": 0, 00:12:34.174 "data_size": 63488 00:12:34.174 }, 00:12:34.174 { 00:12:34.174 "name": "BaseBdev2", 00:12:34.174 "uuid": "23695d3e-befb-472e-b151-9b5a7ccd1e2e", 00:12:34.174 "is_configured": true, 00:12:34.174 "data_offset": 2048, 00:12:34.174 "data_size": 63488 00:12:34.174 }, 00:12:34.174 { 00:12:34.174 "name": "BaseBdev3", 00:12:34.174 "uuid": "49642e11-5943-43df-9617-38498d102cfb", 00:12:34.174 "is_configured": true, 00:12:34.174 "data_offset": 2048, 00:12:34.174 "data_size": 63488 00:12:34.174 }, 00:12:34.174 { 00:12:34.174 "name": "BaseBdev4", 00:12:34.174 "uuid": "7ea195f0-4778-4542-a564-45dd07438868", 00:12:34.174 "is_configured": true, 00:12:34.174 "data_offset": 2048, 00:12:34.174 "data_size": 63488 00:12:34.174 } 00:12:34.174 ] 00:12:34.174 }' 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.174 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.739 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:34.739 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.740 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.740 11:44:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.740 11:44:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.740 11:44:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.740 [2024-11-04 11:45:00.032428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.740 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.740 [2024-11-04 11:45:00.196413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.998 [2024-11-04 11:45:00.366059] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:34.998 [2024-11-04 11:45:00.366196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.998 [2024-11-04 11:45:00.475468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.998 [2024-11-04 11:45:00.475644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.998 [2024-11-04 11:45:00.475689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.998 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 BaseBdev2 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:35.286 [ 00:12:35.286 { 00:12:35.286 "name": "BaseBdev2", 00:12:35.286 "aliases": [ 00:12:35.286 "eae7b633-962d-4469-aed1-fdd23697598b" 00:12:35.286 ], 00:12:35.286 "product_name": "Malloc disk", 00:12:35.286 "block_size": 512, 00:12:35.286 "num_blocks": 65536, 00:12:35.286 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:35.286 "assigned_rate_limits": { 00:12:35.286 "rw_ios_per_sec": 0, 00:12:35.286 "rw_mbytes_per_sec": 0, 00:12:35.286 "r_mbytes_per_sec": 0, 00:12:35.286 "w_mbytes_per_sec": 0 00:12:35.286 }, 00:12:35.286 "claimed": false, 00:12:35.286 "zoned": false, 00:12:35.286 "supported_io_types": { 00:12:35.286 "read": true, 00:12:35.286 "write": true, 00:12:35.286 "unmap": true, 00:12:35.286 "flush": true, 00:12:35.286 "reset": true, 00:12:35.286 "nvme_admin": false, 00:12:35.286 "nvme_io": false, 00:12:35.286 "nvme_io_md": false, 00:12:35.286 "write_zeroes": true, 00:12:35.286 "zcopy": true, 00:12:35.286 "get_zone_info": false, 00:12:35.286 "zone_management": false, 00:12:35.286 "zone_append": false, 00:12:35.286 "compare": false, 00:12:35.286 "compare_and_write": false, 00:12:35.286 "abort": true, 00:12:35.286 "seek_hole": false, 00:12:35.286 "seek_data": false, 00:12:35.286 "copy": true, 00:12:35.286 "nvme_iov_md": false 00:12:35.286 }, 00:12:35.286 "memory_domains": [ 00:12:35.286 { 00:12:35.286 "dma_device_id": "system", 00:12:35.286 "dma_device_type": 1 00:12:35.286 }, 00:12:35.286 { 00:12:35.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.286 "dma_device_type": 2 00:12:35.286 } 00:12:35.286 ], 00:12:35.286 "driver_specific": {} 00:12:35.286 } 00:12:35.286 ] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.286 11:45:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 BaseBdev3 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 [ 00:12:35.286 { 00:12:35.286 "name": "BaseBdev3", 00:12:35.286 "aliases": [ 00:12:35.286 "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2" 00:12:35.286 ], 00:12:35.286 "product_name": "Malloc disk", 00:12:35.286 "block_size": 512, 00:12:35.286 "num_blocks": 65536, 00:12:35.286 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:35.286 "assigned_rate_limits": { 00:12:35.286 "rw_ios_per_sec": 0, 00:12:35.286 "rw_mbytes_per_sec": 0, 00:12:35.286 "r_mbytes_per_sec": 0, 00:12:35.286 "w_mbytes_per_sec": 0 00:12:35.286 }, 00:12:35.286 "claimed": false, 00:12:35.286 "zoned": false, 00:12:35.286 "supported_io_types": { 00:12:35.286 "read": true, 00:12:35.286 "write": true, 00:12:35.286 "unmap": true, 00:12:35.286 "flush": true, 00:12:35.286 "reset": true, 00:12:35.286 "nvme_admin": false, 00:12:35.286 "nvme_io": false, 00:12:35.286 "nvme_io_md": false, 00:12:35.286 "write_zeroes": true, 00:12:35.286 "zcopy": true, 00:12:35.286 "get_zone_info": false, 00:12:35.286 "zone_management": false, 00:12:35.286 "zone_append": false, 00:12:35.286 "compare": false, 00:12:35.286 "compare_and_write": false, 00:12:35.286 "abort": true, 00:12:35.286 "seek_hole": false, 00:12:35.286 "seek_data": false, 00:12:35.286 "copy": true, 00:12:35.286 "nvme_iov_md": false 00:12:35.286 }, 00:12:35.286 "memory_domains": [ 00:12:35.286 { 00:12:35.286 "dma_device_id": "system", 00:12:35.286 "dma_device_type": 1 00:12:35.286 }, 00:12:35.286 { 00:12:35.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.286 "dma_device_type": 2 00:12:35.286 } 00:12:35.286 ], 00:12:35.286 "driver_specific": {} 00:12:35.286 } 00:12:35.286 ] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 BaseBdev4 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 [ 00:12:35.286 { 00:12:35.286 "name": "BaseBdev4", 00:12:35.286 "aliases": [ 00:12:35.286 "953ad104-9277-47e7-8422-4923351b3048" 00:12:35.286 ], 00:12:35.286 "product_name": "Malloc disk", 00:12:35.286 "block_size": 512, 00:12:35.286 "num_blocks": 65536, 00:12:35.286 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:35.286 "assigned_rate_limits": { 00:12:35.286 "rw_ios_per_sec": 0, 00:12:35.286 "rw_mbytes_per_sec": 0, 00:12:35.286 "r_mbytes_per_sec": 0, 00:12:35.286 "w_mbytes_per_sec": 0 00:12:35.286 }, 00:12:35.286 "claimed": false, 00:12:35.286 "zoned": false, 00:12:35.286 "supported_io_types": { 00:12:35.286 "read": true, 00:12:35.286 "write": true, 00:12:35.286 "unmap": true, 00:12:35.286 "flush": true, 00:12:35.286 "reset": true, 00:12:35.286 "nvme_admin": false, 00:12:35.286 "nvme_io": false, 00:12:35.286 "nvme_io_md": false, 00:12:35.286 "write_zeroes": true, 00:12:35.286 "zcopy": true, 00:12:35.286 "get_zone_info": false, 00:12:35.286 "zone_management": false, 00:12:35.286 "zone_append": false, 00:12:35.286 "compare": false, 00:12:35.286 "compare_and_write": false, 00:12:35.286 "abort": true, 00:12:35.286 "seek_hole": false, 00:12:35.286 "seek_data": false, 00:12:35.286 "copy": true, 00:12:35.286 "nvme_iov_md": false 00:12:35.286 }, 00:12:35.286 "memory_domains": [ 00:12:35.286 { 00:12:35.286 "dma_device_id": "system", 00:12:35.286 "dma_device_type": 1 00:12:35.286 }, 00:12:35.286 { 00:12:35.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.286 "dma_device_type": 2 00:12:35.286 } 00:12:35.286 ], 00:12:35.286 "driver_specific": {} 00:12:35.286 } 00:12:35.286 ] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.286 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.551 [2024-11-04 11:45:00.802938] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.551 [2024-11-04 11:45:00.803111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.551 [2024-11-04 11:45:00.803181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.551 [2024-11-04 11:45:00.805858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.551 [2024-11-04 11:45:00.805968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.551 "name": "Existed_Raid", 00:12:35.551 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:35.551 "strip_size_kb": 0, 00:12:35.551 "state": "configuring", 00:12:35.551 "raid_level": "raid1", 00:12:35.551 "superblock": true, 00:12:35.551 "num_base_bdevs": 4, 00:12:35.551 "num_base_bdevs_discovered": 3, 00:12:35.551 "num_base_bdevs_operational": 4, 00:12:35.551 "base_bdevs_list": [ 00:12:35.551 { 00:12:35.551 "name": "BaseBdev1", 00:12:35.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.551 "is_configured": false, 00:12:35.551 "data_offset": 0, 00:12:35.551 "data_size": 0 00:12:35.551 }, 00:12:35.551 { 00:12:35.551 "name": "BaseBdev2", 00:12:35.551 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 
00:12:35.551 "is_configured": true, 00:12:35.551 "data_offset": 2048, 00:12:35.551 "data_size": 63488 00:12:35.551 }, 00:12:35.551 { 00:12:35.551 "name": "BaseBdev3", 00:12:35.551 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:35.551 "is_configured": true, 00:12:35.551 "data_offset": 2048, 00:12:35.551 "data_size": 63488 00:12:35.551 }, 00:12:35.551 { 00:12:35.551 "name": "BaseBdev4", 00:12:35.551 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:35.551 "is_configured": true, 00:12:35.551 "data_offset": 2048, 00:12:35.551 "data_size": 63488 00:12:35.551 } 00:12:35.551 ] 00:12:35.551 }' 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.551 11:45:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.811 [2024-11-04 11:45:01.234204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.811 "name": "Existed_Raid", 00:12:35.811 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:35.811 "strip_size_kb": 0, 00:12:35.811 "state": "configuring", 00:12:35.811 "raid_level": "raid1", 00:12:35.811 "superblock": true, 00:12:35.811 "num_base_bdevs": 4, 00:12:35.811 "num_base_bdevs_discovered": 2, 00:12:35.811 "num_base_bdevs_operational": 4, 00:12:35.811 "base_bdevs_list": [ 00:12:35.811 { 00:12:35.811 "name": "BaseBdev1", 00:12:35.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.811 "is_configured": false, 00:12:35.811 "data_offset": 0, 00:12:35.811 "data_size": 0 00:12:35.811 }, 00:12:35.811 { 00:12:35.811 "name": null, 00:12:35.811 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:35.811 
"is_configured": false, 00:12:35.811 "data_offset": 0, 00:12:35.811 "data_size": 63488 00:12:35.811 }, 00:12:35.811 { 00:12:35.811 "name": "BaseBdev3", 00:12:35.811 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:35.811 "is_configured": true, 00:12:35.811 "data_offset": 2048, 00:12:35.811 "data_size": 63488 00:12:35.811 }, 00:12:35.811 { 00:12:35.811 "name": "BaseBdev4", 00:12:35.811 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:35.811 "is_configured": true, 00:12:35.811 "data_offset": 2048, 00:12:35.811 "data_size": 63488 00:12:35.811 } 00:12:35.811 ] 00:12:35.811 }' 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.811 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.377 [2024-11-04 11:45:01.736889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.377 BaseBdev1 
00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.377 [ 00:12:36.377 { 00:12:36.377 "name": "BaseBdev1", 00:12:36.377 "aliases": [ 00:12:36.377 "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1" 00:12:36.377 ], 00:12:36.377 "product_name": "Malloc disk", 00:12:36.377 "block_size": 512, 00:12:36.377 "num_blocks": 65536, 00:12:36.377 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:36.377 "assigned_rate_limits": { 00:12:36.377 
"rw_ios_per_sec": 0, 00:12:36.377 "rw_mbytes_per_sec": 0, 00:12:36.377 "r_mbytes_per_sec": 0, 00:12:36.377 "w_mbytes_per_sec": 0 00:12:36.377 }, 00:12:36.377 "claimed": true, 00:12:36.377 "claim_type": "exclusive_write", 00:12:36.377 "zoned": false, 00:12:36.377 "supported_io_types": { 00:12:36.377 "read": true, 00:12:36.377 "write": true, 00:12:36.377 "unmap": true, 00:12:36.377 "flush": true, 00:12:36.377 "reset": true, 00:12:36.377 "nvme_admin": false, 00:12:36.377 "nvme_io": false, 00:12:36.377 "nvme_io_md": false, 00:12:36.377 "write_zeroes": true, 00:12:36.377 "zcopy": true, 00:12:36.377 "get_zone_info": false, 00:12:36.377 "zone_management": false, 00:12:36.377 "zone_append": false, 00:12:36.377 "compare": false, 00:12:36.377 "compare_and_write": false, 00:12:36.377 "abort": true, 00:12:36.377 "seek_hole": false, 00:12:36.377 "seek_data": false, 00:12:36.377 "copy": true, 00:12:36.377 "nvme_iov_md": false 00:12:36.377 }, 00:12:36.377 "memory_domains": [ 00:12:36.377 { 00:12:36.377 "dma_device_id": "system", 00:12:36.377 "dma_device_type": 1 00:12:36.377 }, 00:12:36.377 { 00:12:36.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.377 "dma_device_type": 2 00:12:36.377 } 00:12:36.377 ], 00:12:36.377 "driver_specific": {} 00:12:36.377 } 00:12:36.377 ] 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.377 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.378 "name": "Existed_Raid", 00:12:36.378 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:36.378 "strip_size_kb": 0, 00:12:36.378 "state": "configuring", 00:12:36.378 "raid_level": "raid1", 00:12:36.378 "superblock": true, 00:12:36.378 "num_base_bdevs": 4, 00:12:36.378 "num_base_bdevs_discovered": 3, 00:12:36.378 "num_base_bdevs_operational": 4, 00:12:36.378 "base_bdevs_list": [ 00:12:36.378 { 00:12:36.378 "name": "BaseBdev1", 00:12:36.378 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:36.378 "is_configured": true, 00:12:36.378 "data_offset": 2048, 00:12:36.378 "data_size": 63488 
00:12:36.378 }, 00:12:36.378 { 00:12:36.378 "name": null, 00:12:36.378 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:36.378 "is_configured": false, 00:12:36.378 "data_offset": 0, 00:12:36.378 "data_size": 63488 00:12:36.378 }, 00:12:36.378 { 00:12:36.378 "name": "BaseBdev3", 00:12:36.378 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:36.378 "is_configured": true, 00:12:36.378 "data_offset": 2048, 00:12:36.378 "data_size": 63488 00:12:36.378 }, 00:12:36.378 { 00:12:36.378 "name": "BaseBdev4", 00:12:36.378 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:36.378 "is_configured": true, 00:12:36.378 "data_offset": 2048, 00:12:36.378 "data_size": 63488 00:12:36.378 } 00:12:36.378 ] 00:12:36.378 }' 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.378 11:45:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.945 
[2024-11-04 11:45:02.228330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.945 11:45:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.945 "name": "Existed_Raid", 00:12:36.945 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:36.945 "strip_size_kb": 0, 00:12:36.945 "state": "configuring", 00:12:36.945 "raid_level": "raid1", 00:12:36.945 "superblock": true, 00:12:36.945 "num_base_bdevs": 4, 00:12:36.945 "num_base_bdevs_discovered": 2, 00:12:36.945 "num_base_bdevs_operational": 4, 00:12:36.945 "base_bdevs_list": [ 00:12:36.945 { 00:12:36.945 "name": "BaseBdev1", 00:12:36.945 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:36.945 "is_configured": true, 00:12:36.945 "data_offset": 2048, 00:12:36.945 "data_size": 63488 00:12:36.945 }, 00:12:36.945 { 00:12:36.945 "name": null, 00:12:36.945 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:36.945 "is_configured": false, 00:12:36.945 "data_offset": 0, 00:12:36.945 "data_size": 63488 00:12:36.945 }, 00:12:36.945 { 00:12:36.945 "name": null, 00:12:36.945 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:36.945 "is_configured": false, 00:12:36.945 "data_offset": 0, 00:12:36.945 "data_size": 63488 00:12:36.945 }, 00:12:36.945 { 00:12:36.945 "name": "BaseBdev4", 00:12:36.945 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:36.945 "is_configured": true, 00:12:36.945 "data_offset": 2048, 00:12:36.945 "data_size": 63488 00:12:36.945 } 00:12:36.945 ] 00:12:36.945 }' 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.945 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.204 11:45:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.204 [2024-11-04 11:45:02.711529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.204 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.462 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.462 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.462 "name": "Existed_Raid", 00:12:37.462 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:37.462 "strip_size_kb": 0, 00:12:37.462 "state": "configuring", 00:12:37.462 "raid_level": "raid1", 00:12:37.462 "superblock": true, 00:12:37.462 "num_base_bdevs": 4, 00:12:37.462 "num_base_bdevs_discovered": 3, 00:12:37.462 "num_base_bdevs_operational": 4, 00:12:37.462 "base_bdevs_list": [ 00:12:37.462 { 00:12:37.462 "name": "BaseBdev1", 00:12:37.462 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:37.462 "is_configured": true, 00:12:37.462 "data_offset": 2048, 00:12:37.462 "data_size": 63488 00:12:37.462 }, 00:12:37.462 { 00:12:37.462 "name": null, 00:12:37.462 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:37.462 "is_configured": false, 00:12:37.462 "data_offset": 0, 00:12:37.462 "data_size": 63488 00:12:37.462 }, 00:12:37.462 { 00:12:37.462 "name": "BaseBdev3", 00:12:37.462 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:37.462 "is_configured": true, 00:12:37.462 "data_offset": 2048, 00:12:37.462 "data_size": 63488 00:12:37.462 }, 00:12:37.462 { 00:12:37.462 "name": "BaseBdev4", 00:12:37.462 "uuid": 
"953ad104-9277-47e7-8422-4923351b3048", 00:12:37.462 "is_configured": true, 00:12:37.462 "data_offset": 2048, 00:12:37.462 "data_size": 63488 00:12:37.462 } 00:12:37.462 ] 00:12:37.462 }' 00:12:37.462 11:45:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.462 11:45:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.721 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.721 [2024-11-04 11:45:03.150799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.979 "name": "Existed_Raid", 00:12:37.979 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:37.979 "strip_size_kb": 0, 00:12:37.979 "state": "configuring", 00:12:37.979 "raid_level": "raid1", 00:12:37.979 "superblock": true, 00:12:37.979 "num_base_bdevs": 4, 00:12:37.979 "num_base_bdevs_discovered": 2, 00:12:37.979 "num_base_bdevs_operational": 4, 00:12:37.979 "base_bdevs_list": [ 00:12:37.979 { 00:12:37.979 "name": null, 00:12:37.979 
"uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:37.979 "is_configured": false, 00:12:37.979 "data_offset": 0, 00:12:37.979 "data_size": 63488 00:12:37.979 }, 00:12:37.979 { 00:12:37.979 "name": null, 00:12:37.979 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:37.979 "is_configured": false, 00:12:37.979 "data_offset": 0, 00:12:37.979 "data_size": 63488 00:12:37.979 }, 00:12:37.979 { 00:12:37.979 "name": "BaseBdev3", 00:12:37.979 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:37.979 "is_configured": true, 00:12:37.979 "data_offset": 2048, 00:12:37.979 "data_size": 63488 00:12:37.979 }, 00:12:37.979 { 00:12:37.979 "name": "BaseBdev4", 00:12:37.979 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:37.979 "is_configured": true, 00:12:37.979 "data_offset": 2048, 00:12:37.979 "data_size": 63488 00:12:37.979 } 00:12:37.979 ] 00:12:37.979 }' 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.979 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.239 [2024-11-04 11:45:03.719644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.239 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.497 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.497 "name": "Existed_Raid", 00:12:38.497 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:38.497 "strip_size_kb": 0, 00:12:38.497 "state": "configuring", 00:12:38.497 "raid_level": "raid1", 00:12:38.497 "superblock": true, 00:12:38.497 "num_base_bdevs": 4, 00:12:38.497 "num_base_bdevs_discovered": 3, 00:12:38.497 "num_base_bdevs_operational": 4, 00:12:38.497 "base_bdevs_list": [ 00:12:38.497 { 00:12:38.497 "name": null, 00:12:38.497 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:38.497 "is_configured": false, 00:12:38.497 "data_offset": 0, 00:12:38.497 "data_size": 63488 00:12:38.497 }, 00:12:38.497 { 00:12:38.497 "name": "BaseBdev2", 00:12:38.497 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:38.497 "is_configured": true, 00:12:38.497 "data_offset": 2048, 00:12:38.497 "data_size": 63488 00:12:38.497 }, 00:12:38.497 { 00:12:38.497 "name": "BaseBdev3", 00:12:38.498 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:38.498 "is_configured": true, 00:12:38.498 "data_offset": 2048, 00:12:38.498 "data_size": 63488 00:12:38.498 }, 00:12:38.498 { 00:12:38.498 "name": "BaseBdev4", 00:12:38.498 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:38.498 "is_configured": true, 00:12:38.498 "data_offset": 2048, 00:12:38.498 "data_size": 63488 00:12:38.498 } 00:12:38.498 ] 00:12:38.498 }' 00:12:38.498 11:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.498 11:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.755 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.016 [2024-11-04 11:45:04.293167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:39.017 [2024-11-04 11:45:04.293444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:39.017 [2024-11-04 11:45:04.293464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.017 [2024-11-04 11:45:04.293754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:39.017 
[2024-11-04 11:45:04.293985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:39.017 NewBaseBdev 00:12:39.017 [2024-11-04 11:45:04.294069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:39.017 [2024-11-04 11:45:04.294285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.017 [ 00:12:39.017 { 00:12:39.017 "name": "NewBaseBdev", 00:12:39.017 "aliases": [ 00:12:39.017 "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1" 00:12:39.017 ], 00:12:39.017 "product_name": "Malloc disk", 00:12:39.017 "block_size": 512, 00:12:39.017 "num_blocks": 65536, 00:12:39.017 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:39.017 "assigned_rate_limits": { 00:12:39.017 "rw_ios_per_sec": 0, 00:12:39.017 "rw_mbytes_per_sec": 0, 00:12:39.017 "r_mbytes_per_sec": 0, 00:12:39.017 "w_mbytes_per_sec": 0 00:12:39.017 }, 00:12:39.017 "claimed": true, 00:12:39.017 "claim_type": "exclusive_write", 00:12:39.017 "zoned": false, 00:12:39.017 "supported_io_types": { 00:12:39.017 "read": true, 00:12:39.017 "write": true, 00:12:39.017 "unmap": true, 00:12:39.017 "flush": true, 00:12:39.017 "reset": true, 00:12:39.017 "nvme_admin": false, 00:12:39.017 "nvme_io": false, 00:12:39.017 "nvme_io_md": false, 00:12:39.017 "write_zeroes": true, 00:12:39.017 "zcopy": true, 00:12:39.017 "get_zone_info": false, 00:12:39.017 "zone_management": false, 00:12:39.017 "zone_append": false, 00:12:39.017 "compare": false, 00:12:39.017 "compare_and_write": false, 00:12:39.017 "abort": true, 00:12:39.017 "seek_hole": false, 00:12:39.017 "seek_data": false, 00:12:39.017 "copy": true, 00:12:39.017 "nvme_iov_md": false 00:12:39.017 }, 00:12:39.017 "memory_domains": [ 00:12:39.017 { 00:12:39.017 "dma_device_id": "system", 00:12:39.017 "dma_device_type": 1 00:12:39.017 }, 00:12:39.017 { 00:12:39.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.017 "dma_device_type": 2 00:12:39.017 } 00:12:39.017 ], 00:12:39.017 "driver_specific": {} 00:12:39.017 } 00:12:39.017 ] 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.017 "name": "Existed_Raid", 00:12:39.017 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:39.017 "strip_size_kb": 0, 00:12:39.017 "state": "online", 00:12:39.017 "raid_level": 
"raid1", 00:12:39.017 "superblock": true, 00:12:39.017 "num_base_bdevs": 4, 00:12:39.017 "num_base_bdevs_discovered": 4, 00:12:39.017 "num_base_bdevs_operational": 4, 00:12:39.017 "base_bdevs_list": [ 00:12:39.017 { 00:12:39.017 "name": "NewBaseBdev", 00:12:39.017 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:39.017 "is_configured": true, 00:12:39.017 "data_offset": 2048, 00:12:39.017 "data_size": 63488 00:12:39.017 }, 00:12:39.017 { 00:12:39.017 "name": "BaseBdev2", 00:12:39.017 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:39.017 "is_configured": true, 00:12:39.017 "data_offset": 2048, 00:12:39.017 "data_size": 63488 00:12:39.017 }, 00:12:39.017 { 00:12:39.017 "name": "BaseBdev3", 00:12:39.017 "uuid": "66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:39.017 "is_configured": true, 00:12:39.017 "data_offset": 2048, 00:12:39.017 "data_size": 63488 00:12:39.017 }, 00:12:39.017 { 00:12:39.017 "name": "BaseBdev4", 00:12:39.017 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:39.017 "is_configured": true, 00:12:39.017 "data_offset": 2048, 00:12:39.017 "data_size": 63488 00:12:39.017 } 00:12:39.017 ] 00:12:39.017 }' 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.017 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:39.276 [2024-11-04 11:45:04.732911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:39.276 "name": "Existed_Raid", 00:12:39.276 "aliases": [ 00:12:39.276 "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc" 00:12:39.276 ], 00:12:39.276 "product_name": "Raid Volume", 00:12:39.276 "block_size": 512, 00:12:39.276 "num_blocks": 63488, 00:12:39.276 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:39.276 "assigned_rate_limits": { 00:12:39.276 "rw_ios_per_sec": 0, 00:12:39.276 "rw_mbytes_per_sec": 0, 00:12:39.276 "r_mbytes_per_sec": 0, 00:12:39.276 "w_mbytes_per_sec": 0 00:12:39.276 }, 00:12:39.276 "claimed": false, 00:12:39.276 "zoned": false, 00:12:39.276 "supported_io_types": { 00:12:39.276 "read": true, 00:12:39.276 "write": true, 00:12:39.276 "unmap": false, 00:12:39.276 "flush": false, 00:12:39.276 "reset": true, 00:12:39.276 "nvme_admin": false, 00:12:39.276 "nvme_io": false, 00:12:39.276 "nvme_io_md": false, 00:12:39.276 "write_zeroes": true, 00:12:39.276 "zcopy": false, 00:12:39.276 "get_zone_info": false, 00:12:39.276 "zone_management": false, 00:12:39.276 "zone_append": false, 00:12:39.276 "compare": false, 00:12:39.276 "compare_and_write": false, 00:12:39.276 "abort": false, 00:12:39.276 "seek_hole": false, 
00:12:39.276 "seek_data": false, 00:12:39.276 "copy": false, 00:12:39.276 "nvme_iov_md": false 00:12:39.276 }, 00:12:39.276 "memory_domains": [ 00:12:39.276 { 00:12:39.276 "dma_device_id": "system", 00:12:39.276 "dma_device_type": 1 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.276 "dma_device_type": 2 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "dma_device_id": "system", 00:12:39.276 "dma_device_type": 1 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.276 "dma_device_type": 2 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "dma_device_id": "system", 00:12:39.276 "dma_device_type": 1 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.276 "dma_device_type": 2 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "dma_device_id": "system", 00:12:39.276 "dma_device_type": 1 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.276 "dma_device_type": 2 00:12:39.276 } 00:12:39.276 ], 00:12:39.276 "driver_specific": { 00:12:39.276 "raid": { 00:12:39.276 "uuid": "e3fe5ac9-7cd6-4b2d-9db3-88e0ce6740fc", 00:12:39.276 "strip_size_kb": 0, 00:12:39.276 "state": "online", 00:12:39.276 "raid_level": "raid1", 00:12:39.276 "superblock": true, 00:12:39.276 "num_base_bdevs": 4, 00:12:39.276 "num_base_bdevs_discovered": 4, 00:12:39.276 "num_base_bdevs_operational": 4, 00:12:39.276 "base_bdevs_list": [ 00:12:39.276 { 00:12:39.276 "name": "NewBaseBdev", 00:12:39.276 "uuid": "8d6d9cde-31f8-4c9f-b8c3-5545f9c858f1", 00:12:39.276 "is_configured": true, 00:12:39.276 "data_offset": 2048, 00:12:39.276 "data_size": 63488 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "name": "BaseBdev2", 00:12:39.276 "uuid": "eae7b633-962d-4469-aed1-fdd23697598b", 00:12:39.276 "is_configured": true, 00:12:39.276 "data_offset": 2048, 00:12:39.276 "data_size": 63488 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "name": "BaseBdev3", 00:12:39.276 "uuid": 
"66a9be6c-59f8-4f18-ba20-13d3bcdff3f2", 00:12:39.276 "is_configured": true, 00:12:39.276 "data_offset": 2048, 00:12:39.276 "data_size": 63488 00:12:39.276 }, 00:12:39.276 { 00:12:39.276 "name": "BaseBdev4", 00:12:39.276 "uuid": "953ad104-9277-47e7-8422-4923351b3048", 00:12:39.276 "is_configured": true, 00:12:39.276 "data_offset": 2048, 00:12:39.276 "data_size": 63488 00:12:39.276 } 00:12:39.276 ] 00:12:39.276 } 00:12:39.276 } 00:12:39.276 }' 00:12:39.276 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:39.535 BaseBdev2 00:12:39.535 BaseBdev3 00:12:39.535 BaseBdev4' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.535 
11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:39.535 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.536 11:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.536 [2024-11-04 11:45:05.036041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:39.536 [2024-11-04 11:45:05.036104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.536 [2024-11-04 11:45:05.036237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.536 [2024-11-04 11:45:05.036593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.536 [2024-11-04 11:45:05.036635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:39.536 11:45:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74093 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74093 ']' 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74093 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:39.536 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74093 00:12:39.794 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:39.794 killing process with pid 74093 00:12:39.794 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:39.794 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74093' 00:12:39.794 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74093 00:12:39.794 [2024-11-04 11:45:05.073360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.794 11:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74093 00:12:40.053 [2024-11-04 11:45:05.516764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.432 11:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:41.432 00:12:41.432 real 0m11.724s 00:12:41.432 user 0m18.421s 00:12:41.432 sys 0m1.984s 00:12:41.432 ************************************ 00:12:41.432 END TEST raid_state_function_test_sb 00:12:41.432 ************************************ 00:12:41.432 11:45:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:41.432 11:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.432 11:45:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:41.432 11:45:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:41.432 11:45:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:41.432 11:45:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.432 ************************************ 00:12:41.432 START TEST raid_superblock_test 00:12:41.432 ************************************ 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74758 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74758 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74758 ']' 00:12:41.432 11:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.433 11:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:41.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.433 11:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.433 11:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:41.433 11:45:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.433 [2024-11-04 11:45:06.919231] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:41.433 [2024-11-04 11:45:06.919347] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74758 ] 00:12:41.691 [2024-11-04 11:45:07.078320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.950 [2024-11-04 11:45:07.221785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.950 [2024-11-04 11:45:07.465375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.950 [2024-11-04 11:45:07.465470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:42.517 
11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.517 malloc1 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.517 [2024-11-04 11:45:07.827368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:42.517 [2024-11-04 11:45:07.827544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.517 [2024-11-04 11:45:07.827601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.517 [2024-11-04 11:45:07.827640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.517 [2024-11-04 11:45:07.830266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.517 [2024-11-04 11:45:07.830342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:42.517 pt1 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.517 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 malloc2 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 [2024-11-04 11:45:07.890339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:42.518 [2024-11-04 11:45:07.890511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.518 [2024-11-04 11:45:07.890593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.518 [2024-11-04 11:45:07.890634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.518 [2024-11-04 11:45:07.893341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.518 [2024-11-04 11:45:07.893441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:42.518 
pt2 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 malloc3 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 [2024-11-04 11:45:07.968322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:42.518 [2024-11-04 11:45:07.968487] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.518 [2024-11-04 11:45:07.968547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:42.518 [2024-11-04 11:45:07.968583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.518 [2024-11-04 11:45:07.971454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.518 [2024-11-04 11:45:07.971542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:42.518 pt3 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.518 11:45:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 malloc4 00:12:42.518 11:45:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.518 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:42.518 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.518 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 [2024-11-04 11:45:08.032458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:42.518 [2024-11-04 11:45:08.032531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.518 [2024-11-04 11:45:08.032554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:42.518 [2024-11-04 11:45:08.032565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.518 [2024-11-04 11:45:08.035057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.518 [2024-11-04 11:45:08.035155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:42.780 pt4 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.780 [2024-11-04 11:45:08.044450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:42.780 [2024-11-04 11:45:08.046597] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:42.780 [2024-11-04 11:45:08.046701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:42.780 [2024-11-04 11:45:08.046783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:42.780 [2024-11-04 11:45:08.047021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:42.780 [2024-11-04 11:45:08.047070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.780 [2024-11-04 11:45:08.047409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:42.780 [2024-11-04 11:45:08.047629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:42.780 [2024-11-04 11:45:08.047679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:42.780 [2024-11-04 11:45:08.047941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.780 
11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.780 "name": "raid_bdev1", 00:12:42.780 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:42.780 "strip_size_kb": 0, 00:12:42.780 "state": "online", 00:12:42.780 "raid_level": "raid1", 00:12:42.780 "superblock": true, 00:12:42.780 "num_base_bdevs": 4, 00:12:42.780 "num_base_bdevs_discovered": 4, 00:12:42.780 "num_base_bdevs_operational": 4, 00:12:42.780 "base_bdevs_list": [ 00:12:42.780 { 00:12:42.780 "name": "pt1", 00:12:42.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.780 "is_configured": true, 00:12:42.780 "data_offset": 2048, 00:12:42.780 "data_size": 63488 00:12:42.780 }, 00:12:42.780 { 00:12:42.780 "name": "pt2", 00:12:42.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.780 "is_configured": true, 00:12:42.780 "data_offset": 2048, 00:12:42.780 "data_size": 63488 00:12:42.780 }, 00:12:42.780 { 00:12:42.780 "name": "pt3", 00:12:42.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.780 "is_configured": true, 00:12:42.780 "data_offset": 2048, 00:12:42.780 "data_size": 63488 
00:12:42.780 }, 00:12:42.780 { 00:12:42.780 "name": "pt4", 00:12:42.780 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:42.780 "is_configured": true, 00:12:42.780 "data_offset": 2048, 00:12:42.780 "data_size": 63488 00:12:42.780 } 00:12:42.780 ] 00:12:42.780 }' 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.780 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.042 [2024-11-04 11:45:08.504202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:43.042 "name": "raid_bdev1", 00:12:43.042 "aliases": [ 00:12:43.042 "3ae622ea-98c9-43df-ad4b-6f6ba988ff34" 00:12:43.042 ], 
00:12:43.042 "product_name": "Raid Volume", 00:12:43.042 "block_size": 512, 00:12:43.042 "num_blocks": 63488, 00:12:43.042 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:43.042 "assigned_rate_limits": { 00:12:43.042 "rw_ios_per_sec": 0, 00:12:43.042 "rw_mbytes_per_sec": 0, 00:12:43.042 "r_mbytes_per_sec": 0, 00:12:43.042 "w_mbytes_per_sec": 0 00:12:43.042 }, 00:12:43.042 "claimed": false, 00:12:43.042 "zoned": false, 00:12:43.042 "supported_io_types": { 00:12:43.042 "read": true, 00:12:43.042 "write": true, 00:12:43.042 "unmap": false, 00:12:43.042 "flush": false, 00:12:43.042 "reset": true, 00:12:43.042 "nvme_admin": false, 00:12:43.042 "nvme_io": false, 00:12:43.042 "nvme_io_md": false, 00:12:43.042 "write_zeroes": true, 00:12:43.042 "zcopy": false, 00:12:43.042 "get_zone_info": false, 00:12:43.042 "zone_management": false, 00:12:43.042 "zone_append": false, 00:12:43.042 "compare": false, 00:12:43.042 "compare_and_write": false, 00:12:43.042 "abort": false, 00:12:43.042 "seek_hole": false, 00:12:43.042 "seek_data": false, 00:12:43.042 "copy": false, 00:12:43.042 "nvme_iov_md": false 00:12:43.042 }, 00:12:43.042 "memory_domains": [ 00:12:43.042 { 00:12:43.042 "dma_device_id": "system", 00:12:43.042 "dma_device_type": 1 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.042 "dma_device_type": 2 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "dma_device_id": "system", 00:12:43.042 "dma_device_type": 1 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.042 "dma_device_type": 2 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "dma_device_id": "system", 00:12:43.042 "dma_device_type": 1 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.042 "dma_device_type": 2 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "dma_device_id": "system", 00:12:43.042 "dma_device_type": 1 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:43.042 "dma_device_type": 2 00:12:43.042 } 00:12:43.042 ], 00:12:43.042 "driver_specific": { 00:12:43.042 "raid": { 00:12:43.042 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:43.042 "strip_size_kb": 0, 00:12:43.042 "state": "online", 00:12:43.042 "raid_level": "raid1", 00:12:43.042 "superblock": true, 00:12:43.042 "num_base_bdevs": 4, 00:12:43.042 "num_base_bdevs_discovered": 4, 00:12:43.042 "num_base_bdevs_operational": 4, 00:12:43.042 "base_bdevs_list": [ 00:12:43.042 { 00:12:43.042 "name": "pt1", 00:12:43.042 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.042 "is_configured": true, 00:12:43.042 "data_offset": 2048, 00:12:43.042 "data_size": 63488 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "name": "pt2", 00:12:43.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.042 "is_configured": true, 00:12:43.042 "data_offset": 2048, 00:12:43.042 "data_size": 63488 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "name": "pt3", 00:12:43.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.042 "is_configured": true, 00:12:43.042 "data_offset": 2048, 00:12:43.042 "data_size": 63488 00:12:43.042 }, 00:12:43.042 { 00:12:43.042 "name": "pt4", 00:12:43.042 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.042 "is_configured": true, 00:12:43.042 "data_offset": 2048, 00:12:43.042 "data_size": 63488 00:12:43.042 } 00:12:43.042 ] 00:12:43.042 } 00:12:43.042 } 00:12:43.042 }' 00:12:43.042 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:43.301 pt2 00:12:43.301 pt3 00:12:43.301 pt4' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.301 11:45:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:43.301 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.301 [2024-11-04 11:45:08.819602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3ae622ea-98c9-43df-ad4b-6f6ba988ff34 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3ae622ea-98c9-43df-ad4b-6f6ba988ff34 ']' 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.560 [2024-11-04 11:45:08.863221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.560 [2024-11-04 11:45:08.863260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.560 [2024-11-04 11:45:08.863354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.560 [2024-11-04 11:45:08.863461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.560 [2024-11-04 11:45:08.863479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.560 11:45:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.560 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.560 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:43.560 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:43.560 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:43.560 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:43.560 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:43.560 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.560 11:45:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.561 [2024-11-04 11:45:09.051003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:43.561 [2024-11-04 11:45:09.053775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:43.561 [2024-11-04 11:45:09.053921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:43.561 [2024-11-04 11:45:09.053963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:43.561 [2024-11-04 11:45:09.054039] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:43.561 [2024-11-04 11:45:09.054107] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:43.561 [2024-11-04 11:45:09.054128] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:43.561 [2024-11-04 11:45:09.054146] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:43.561 [2024-11-04 11:45:09.054161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.561 [2024-11-04 11:45:09.054173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:43.561 request: 00:12:43.561 { 00:12:43.561 "name": "raid_bdev1", 00:12:43.561 "raid_level": "raid1", 00:12:43.561 "base_bdevs": [ 00:12:43.561 "malloc1", 00:12:43.561 "malloc2", 00:12:43.561 "malloc3", 00:12:43.561 "malloc4" 00:12:43.561 ], 00:12:43.561 "superblock": false, 00:12:43.561 "method": "bdev_raid_create", 00:12:43.561 "req_id": 1 00:12:43.561 } 00:12:43.561 Got JSON-RPC error response 00:12:43.561 response: 00:12:43.561 { 00:12:43.561 "code": -17, 00:12:43.561 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:43.561 } 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.561 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:43.820 
11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.820 [2024-11-04 11:45:09.114893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:43.820 [2024-11-04 11:45:09.115058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.820 [2024-11-04 11:45:09.115103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:43.820 [2024-11-04 11:45:09.115146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.820 [2024-11-04 11:45:09.118079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.820 [2024-11-04 11:45:09.118179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:43.820 [2024-11-04 11:45:09.118315] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:43.820 [2024-11-04 11:45:09.118443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:43.820 pt1 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.820 11:45:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.820 "name": "raid_bdev1", 00:12:43.820 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:43.820 "strip_size_kb": 0, 00:12:43.820 "state": "configuring", 00:12:43.820 "raid_level": "raid1", 00:12:43.820 "superblock": true, 00:12:43.820 "num_base_bdevs": 4, 00:12:43.820 "num_base_bdevs_discovered": 1, 00:12:43.820 "num_base_bdevs_operational": 4, 00:12:43.820 "base_bdevs_list": [ 00:12:43.820 { 00:12:43.820 "name": "pt1", 00:12:43.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:43.820 "is_configured": true, 00:12:43.820 "data_offset": 2048, 00:12:43.820 "data_size": 63488 00:12:43.820 }, 00:12:43.820 { 00:12:43.820 "name": null, 00:12:43.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.820 "is_configured": false, 00:12:43.820 "data_offset": 2048, 00:12:43.820 "data_size": 63488 00:12:43.820 }, 00:12:43.820 { 00:12:43.820 "name": null, 00:12:43.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:43.820 
"is_configured": false, 00:12:43.820 "data_offset": 2048, 00:12:43.820 "data_size": 63488 00:12:43.820 }, 00:12:43.820 { 00:12:43.820 "name": null, 00:12:43.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:43.820 "is_configured": false, 00:12:43.820 "data_offset": 2048, 00:12:43.820 "data_size": 63488 00:12:43.820 } 00:12:43.820 ] 00:12:43.820 }' 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.820 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.079 [2024-11-04 11:45:09.514271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:44.079 [2024-11-04 11:45:09.514463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.079 [2024-11-04 11:45:09.514513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:44.079 [2024-11-04 11:45:09.514584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.079 [2024-11-04 11:45:09.515199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.079 [2024-11-04 11:45:09.515279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:44.079 [2024-11-04 11:45:09.515424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:44.079 [2024-11-04 11:45:09.515472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:44.079 pt2 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.079 [2024-11-04 11:45:09.522196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.079 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.079 "name": "raid_bdev1", 00:12:44.079 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:44.079 "strip_size_kb": 0, 00:12:44.079 "state": "configuring", 00:12:44.079 "raid_level": "raid1", 00:12:44.079 "superblock": true, 00:12:44.079 "num_base_bdevs": 4, 00:12:44.079 "num_base_bdevs_discovered": 1, 00:12:44.080 "num_base_bdevs_operational": 4, 00:12:44.080 "base_bdevs_list": [ 00:12:44.080 { 00:12:44.080 "name": "pt1", 00:12:44.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.080 "is_configured": true, 00:12:44.080 "data_offset": 2048, 00:12:44.080 "data_size": 63488 00:12:44.080 }, 00:12:44.080 { 00:12:44.080 "name": null, 00:12:44.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.080 "is_configured": false, 00:12:44.080 "data_offset": 0, 00:12:44.080 "data_size": 63488 00:12:44.080 }, 00:12:44.080 { 00:12:44.080 "name": null, 00:12:44.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.080 "is_configured": false, 00:12:44.080 "data_offset": 2048, 00:12:44.080 "data_size": 63488 00:12:44.080 }, 00:12:44.080 { 00:12:44.080 "name": null, 00:12:44.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.080 "is_configured": false, 00:12:44.080 "data_offset": 2048, 00:12:44.080 "data_size": 63488 00:12:44.080 } 00:12:44.080 ] 00:12:44.080 }' 00:12:44.080 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.080 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 [2024-11-04 11:45:09.977474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:44.646 [2024-11-04 11:45:09.977656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.646 [2024-11-04 11:45:09.977693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:44.646 [2024-11-04 11:45:09.977706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.646 [2024-11-04 11:45:09.978277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.646 [2024-11-04 11:45:09.978305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:44.646 [2024-11-04 11:45:09.978426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:44.646 [2024-11-04 11:45:09.978453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:44.646 pt2 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:44.646 11:45:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 [2024-11-04 11:45:09.985376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:44.646 [2024-11-04 11:45:09.985485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.646 [2024-11-04 11:45:09.985509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:44.646 [2024-11-04 11:45:09.985518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.646 [2024-11-04 11:45:09.985941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.646 [2024-11-04 11:45:09.985964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:44.646 [2024-11-04 11:45:09.986039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:44.646 [2024-11-04 11:45:09.986058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:44.646 pt3 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 [2024-11-04 11:45:09.993332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:44.646 [2024-11-04 
11:45:09.993430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.646 [2024-11-04 11:45:09.993455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:44.646 [2024-11-04 11:45:09.993463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.646 [2024-11-04 11:45:09.993889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.646 [2024-11-04 11:45:09.993912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:44.646 [2024-11-04 11:45:09.993980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:44.646 [2024-11-04 11:45:09.993997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:44.646 [2024-11-04 11:45:09.994172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.646 [2024-11-04 11:45:09.994188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.646 [2024-11-04 11:45:09.994465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:44.646 [2024-11-04 11:45:09.994628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.646 [2024-11-04 11:45:09.994641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:44.646 [2024-11-04 11:45:09.994801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.646 pt4 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.646 11:45:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.646 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.646 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.646 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.646 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.646 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.646 "name": "raid_bdev1", 00:12:44.646 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:44.646 "strip_size_kb": 0, 00:12:44.646 "state": "online", 00:12:44.646 "raid_level": "raid1", 00:12:44.646 "superblock": true, 00:12:44.646 "num_base_bdevs": 4, 00:12:44.646 
"num_base_bdevs_discovered": 4, 00:12:44.646 "num_base_bdevs_operational": 4, 00:12:44.646 "base_bdevs_list": [ 00:12:44.646 { 00:12:44.646 "name": "pt1", 00:12:44.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:44.646 "is_configured": true, 00:12:44.646 "data_offset": 2048, 00:12:44.646 "data_size": 63488 00:12:44.646 }, 00:12:44.646 { 00:12:44.646 "name": "pt2", 00:12:44.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:44.646 "is_configured": true, 00:12:44.646 "data_offset": 2048, 00:12:44.646 "data_size": 63488 00:12:44.646 }, 00:12:44.646 { 00:12:44.646 "name": "pt3", 00:12:44.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.647 "is_configured": true, 00:12:44.647 "data_offset": 2048, 00:12:44.647 "data_size": 63488 00:12:44.647 }, 00:12:44.647 { 00:12:44.647 "name": "pt4", 00:12:44.647 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:44.647 "is_configured": true, 00:12:44.647 "data_offset": 2048, 00:12:44.647 "data_size": 63488 00:12:44.647 } 00:12:44.647 ] 00:12:44.647 }' 00:12:44.647 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.647 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.214 [2024-11-04 11:45:10.449146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.214 "name": "raid_bdev1", 00:12:45.214 "aliases": [ 00:12:45.214 "3ae622ea-98c9-43df-ad4b-6f6ba988ff34" 00:12:45.214 ], 00:12:45.214 "product_name": "Raid Volume", 00:12:45.214 "block_size": 512, 00:12:45.214 "num_blocks": 63488, 00:12:45.214 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:45.214 "assigned_rate_limits": { 00:12:45.214 "rw_ios_per_sec": 0, 00:12:45.214 "rw_mbytes_per_sec": 0, 00:12:45.214 "r_mbytes_per_sec": 0, 00:12:45.214 "w_mbytes_per_sec": 0 00:12:45.214 }, 00:12:45.214 "claimed": false, 00:12:45.214 "zoned": false, 00:12:45.214 "supported_io_types": { 00:12:45.214 "read": true, 00:12:45.214 "write": true, 00:12:45.214 "unmap": false, 00:12:45.214 "flush": false, 00:12:45.214 "reset": true, 00:12:45.214 "nvme_admin": false, 00:12:45.214 "nvme_io": false, 00:12:45.214 "nvme_io_md": false, 00:12:45.214 "write_zeroes": true, 00:12:45.214 "zcopy": false, 00:12:45.214 "get_zone_info": false, 00:12:45.214 "zone_management": false, 00:12:45.214 "zone_append": false, 00:12:45.214 "compare": false, 00:12:45.214 "compare_and_write": false, 00:12:45.214 "abort": false, 00:12:45.214 "seek_hole": false, 00:12:45.214 "seek_data": false, 00:12:45.214 "copy": false, 00:12:45.214 "nvme_iov_md": false 00:12:45.214 }, 00:12:45.214 "memory_domains": [ 00:12:45.214 { 00:12:45.214 "dma_device_id": "system", 00:12:45.214 
"dma_device_type": 1 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.214 "dma_device_type": 2 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "dma_device_id": "system", 00:12:45.214 "dma_device_type": 1 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.214 "dma_device_type": 2 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "dma_device_id": "system", 00:12:45.214 "dma_device_type": 1 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.214 "dma_device_type": 2 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "dma_device_id": "system", 00:12:45.214 "dma_device_type": 1 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.214 "dma_device_type": 2 00:12:45.214 } 00:12:45.214 ], 00:12:45.214 "driver_specific": { 00:12:45.214 "raid": { 00:12:45.214 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:45.214 "strip_size_kb": 0, 00:12:45.214 "state": "online", 00:12:45.214 "raid_level": "raid1", 00:12:45.214 "superblock": true, 00:12:45.214 "num_base_bdevs": 4, 00:12:45.214 "num_base_bdevs_discovered": 4, 00:12:45.214 "num_base_bdevs_operational": 4, 00:12:45.214 "base_bdevs_list": [ 00:12:45.214 { 00:12:45.214 "name": "pt1", 00:12:45.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:45.214 "is_configured": true, 00:12:45.214 "data_offset": 2048, 00:12:45.214 "data_size": 63488 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "name": "pt2", 00:12:45.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.214 "is_configured": true, 00:12:45.214 "data_offset": 2048, 00:12:45.214 "data_size": 63488 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "name": "pt3", 00:12:45.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.214 "is_configured": true, 00:12:45.214 "data_offset": 2048, 00:12:45.214 "data_size": 63488 00:12:45.214 }, 00:12:45.214 { 00:12:45.214 "name": "pt4", 00:12:45.214 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:45.214 "is_configured": true, 00:12:45.214 "data_offset": 2048, 00:12:45.214 "data_size": 63488 00:12:45.214 } 00:12:45.214 ] 00:12:45.214 } 00:12:45.214 } 00:12:45.214 }' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:45.214 pt2 00:12:45.214 pt3 00:12:45.214 pt4' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:45.214 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.215 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:45.215 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.215 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 [2024-11-04 11:45:10.756529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3ae622ea-98c9-43df-ad4b-6f6ba988ff34 '!=' 3ae622ea-98c9-43df-ad4b-6f6ba988ff34 ']' 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 [2024-11-04 11:45:10.796292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:45.474 11:45:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.474 "name": "raid_bdev1", 00:12:45.474 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:45.474 "strip_size_kb": 0, 00:12:45.474 "state": "online", 
00:12:45.474 "raid_level": "raid1", 00:12:45.474 "superblock": true, 00:12:45.474 "num_base_bdevs": 4, 00:12:45.474 "num_base_bdevs_discovered": 3, 00:12:45.474 "num_base_bdevs_operational": 3, 00:12:45.474 "base_bdevs_list": [ 00:12:45.474 { 00:12:45.474 "name": null, 00:12:45.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.474 "is_configured": false, 00:12:45.474 "data_offset": 0, 00:12:45.474 "data_size": 63488 00:12:45.474 }, 00:12:45.474 { 00:12:45.474 "name": "pt2", 00:12:45.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.474 "is_configured": true, 00:12:45.474 "data_offset": 2048, 00:12:45.474 "data_size": 63488 00:12:45.474 }, 00:12:45.474 { 00:12:45.474 "name": "pt3", 00:12:45.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.474 "is_configured": true, 00:12:45.474 "data_offset": 2048, 00:12:45.474 "data_size": 63488 00:12:45.474 }, 00:12:45.474 { 00:12:45.474 "name": "pt4", 00:12:45.474 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.474 "is_configured": true, 00:12:45.474 "data_offset": 2048, 00:12:45.474 "data_size": 63488 00:12:45.474 } 00:12:45.474 ] 00:12:45.474 }' 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.474 11:45:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.733 [2024-11-04 11:45:11.219498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.733 [2024-11-04 11:45:11.219570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.733 [2024-11-04 11:45:11.219689] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:45.733 [2024-11-04 11:45:11.219787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.733 [2024-11-04 11:45:11.219798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:45.733 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:45.993 
11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.993 [2024-11-04 11:45:11.315233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:45.993 [2024-11-04 11:45:11.315366] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.993 [2024-11-04 11:45:11.315413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:45.993 [2024-11-04 11:45:11.315444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.993 [2024-11-04 11:45:11.318091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.993 [2024-11-04 11:45:11.318175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:45.993 [2024-11-04 11:45:11.318309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:45.993 [2024-11-04 11:45:11.318388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:45.993 pt2 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.993 "name": "raid_bdev1", 00:12:45.993 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:45.993 "strip_size_kb": 0, 00:12:45.993 "state": "configuring", 00:12:45.993 "raid_level": "raid1", 00:12:45.993 "superblock": true, 00:12:45.993 "num_base_bdevs": 4, 00:12:45.993 "num_base_bdevs_discovered": 1, 00:12:45.993 "num_base_bdevs_operational": 3, 00:12:45.993 "base_bdevs_list": [ 00:12:45.993 { 00:12:45.993 "name": null, 00:12:45.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.993 "is_configured": false, 00:12:45.993 "data_offset": 2048, 00:12:45.993 "data_size": 63488 00:12:45.993 }, 00:12:45.993 { 00:12:45.993 "name": "pt2", 00:12:45.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:45.993 "is_configured": true, 00:12:45.993 "data_offset": 2048, 00:12:45.993 "data_size": 63488 00:12:45.993 }, 00:12:45.993 { 00:12:45.993 "name": null, 00:12:45.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:45.993 "is_configured": false, 00:12:45.993 "data_offset": 2048, 00:12:45.993 "data_size": 63488 00:12:45.993 }, 00:12:45.993 { 00:12:45.993 "name": null, 00:12:45.993 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:45.993 "is_configured": false, 00:12:45.993 "data_offset": 2048, 00:12:45.993 "data_size": 63488 00:12:45.993 } 00:12:45.993 ] 00:12:45.993 }' 
00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.993 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.560 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:46.560 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.561 [2024-11-04 11:45:11.782605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:46.561 [2024-11-04 11:45:11.782699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.561 [2024-11-04 11:45:11.782728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:46.561 [2024-11-04 11:45:11.782739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.561 [2024-11-04 11:45:11.783273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.561 [2024-11-04 11:45:11.783298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:46.561 [2024-11-04 11:45:11.783417] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:46.561 [2024-11-04 11:45:11.783444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:46.561 pt3 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.561 "name": "raid_bdev1", 00:12:46.561 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:46.561 "strip_size_kb": 0, 00:12:46.561 "state": "configuring", 00:12:46.561 "raid_level": "raid1", 00:12:46.561 "superblock": true, 00:12:46.561 "num_base_bdevs": 4, 00:12:46.561 "num_base_bdevs_discovered": 2, 00:12:46.561 "num_base_bdevs_operational": 3, 00:12:46.561 
"base_bdevs_list": [ 00:12:46.561 { 00:12:46.561 "name": null, 00:12:46.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.561 "is_configured": false, 00:12:46.561 "data_offset": 2048, 00:12:46.561 "data_size": 63488 00:12:46.561 }, 00:12:46.561 { 00:12:46.561 "name": "pt2", 00:12:46.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.561 "is_configured": true, 00:12:46.561 "data_offset": 2048, 00:12:46.561 "data_size": 63488 00:12:46.561 }, 00:12:46.561 { 00:12:46.561 "name": "pt3", 00:12:46.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.561 "is_configured": true, 00:12:46.561 "data_offset": 2048, 00:12:46.561 "data_size": 63488 00:12:46.561 }, 00:12:46.561 { 00:12:46.561 "name": null, 00:12:46.561 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:46.561 "is_configured": false, 00:12:46.561 "data_offset": 2048, 00:12:46.561 "data_size": 63488 00:12:46.561 } 00:12:46.561 ] 00:12:46.561 }' 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.561 11:45:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.819 [2024-11-04 11:45:12.229849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:46.819 [2024-11-04 11:45:12.230036] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.819 [2024-11-04 11:45:12.230086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:46.819 [2024-11-04 11:45:12.230120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.819 [2024-11-04 11:45:12.230767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.819 [2024-11-04 11:45:12.230832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:46.819 [2024-11-04 11:45:12.230980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:46.819 [2024-11-04 11:45:12.231045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:46.819 [2024-11-04 11:45:12.231264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:46.819 [2024-11-04 11:45:12.231304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:46.819 [2024-11-04 11:45:12.231669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:46.819 [2024-11-04 11:45:12.231890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:46.819 [2024-11-04 11:45:12.231938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:46.819 [2024-11-04 11:45:12.232159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.819 pt4 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.819 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.819 "name": "raid_bdev1", 00:12:46.819 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:46.819 "strip_size_kb": 0, 00:12:46.819 "state": "online", 00:12:46.819 "raid_level": "raid1", 00:12:46.819 "superblock": true, 00:12:46.819 "num_base_bdevs": 4, 00:12:46.819 "num_base_bdevs_discovered": 3, 00:12:46.819 "num_base_bdevs_operational": 3, 00:12:46.819 "base_bdevs_list": [ 00:12:46.819 { 00:12:46.819 "name": null, 00:12:46.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.819 "is_configured": false, 00:12:46.819 
"data_offset": 2048, 00:12:46.819 "data_size": 63488 00:12:46.819 }, 00:12:46.819 { 00:12:46.819 "name": "pt2", 00:12:46.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:46.819 "is_configured": true, 00:12:46.819 "data_offset": 2048, 00:12:46.819 "data_size": 63488 00:12:46.819 }, 00:12:46.819 { 00:12:46.819 "name": "pt3", 00:12:46.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:46.819 "is_configured": true, 00:12:46.819 "data_offset": 2048, 00:12:46.819 "data_size": 63488 00:12:46.819 }, 00:12:46.819 { 00:12:46.819 "name": "pt4", 00:12:46.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:46.819 "is_configured": true, 00:12:46.819 "data_offset": 2048, 00:12:46.819 "data_size": 63488 00:12:46.819 } 00:12:46.819 ] 00:12:46.819 }' 00:12:46.820 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.820 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.387 [2024-11-04 11:45:12.665057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.387 [2024-11-04 11:45:12.665104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.387 [2024-11-04 11:45:12.665211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.387 [2024-11-04 11:45:12.665301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.387 [2024-11-04 11:45:12.665328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:47.387 11:45:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:47.387 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.388 [2024-11-04 11:45:12.740871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:47.388 [2024-11-04 11:45:12.740948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:47.388 [2024-11-04 11:45:12.740970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:47.388 [2024-11-04 11:45:12.740982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.388 [2024-11-04 11:45:12.743623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.388 [2024-11-04 11:45:12.743724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:47.388 [2024-11-04 11:45:12.743819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:47.388 [2024-11-04 11:45:12.743879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:47.388 [2024-11-04 11:45:12.744029] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:47.388 [2024-11-04 11:45:12.744043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:47.388 [2024-11-04 11:45:12.744060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:47.388 [2024-11-04 11:45:12.744154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.388 [2024-11-04 11:45:12.744271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.388 pt1 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.388 "name": "raid_bdev1", 00:12:47.388 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:47.388 "strip_size_kb": 0, 00:12:47.388 "state": "configuring", 00:12:47.388 "raid_level": "raid1", 00:12:47.388 "superblock": true, 00:12:47.388 "num_base_bdevs": 4, 00:12:47.388 "num_base_bdevs_discovered": 2, 00:12:47.388 "num_base_bdevs_operational": 3, 00:12:47.388 "base_bdevs_list": [ 00:12:47.388 { 00:12:47.388 "name": null, 00:12:47.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.388 "is_configured": false, 00:12:47.388 "data_offset": 2048, 00:12:47.388 
"data_size": 63488 00:12:47.388 }, 00:12:47.388 { 00:12:47.388 "name": "pt2", 00:12:47.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.388 "is_configured": true, 00:12:47.388 "data_offset": 2048, 00:12:47.388 "data_size": 63488 00:12:47.388 }, 00:12:47.388 { 00:12:47.388 "name": "pt3", 00:12:47.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.388 "is_configured": true, 00:12:47.388 "data_offset": 2048, 00:12:47.388 "data_size": 63488 00:12:47.388 }, 00:12:47.388 { 00:12:47.388 "name": null, 00:12:47.388 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.388 "is_configured": false, 00:12:47.388 "data_offset": 2048, 00:12:47.388 "data_size": 63488 00:12:47.388 } 00:12:47.388 ] 00:12:47.388 }' 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.388 11:45:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.961 [2024-11-04 
11:45:13.244153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:47.961 [2024-11-04 11:45:13.244347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.961 [2024-11-04 11:45:13.244411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:47.961 [2024-11-04 11:45:13.244501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.961 [2024-11-04 11:45:13.245167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.961 [2024-11-04 11:45:13.245245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:47.961 [2024-11-04 11:45:13.245424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:47.961 [2024-11-04 11:45:13.245507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:47.961 [2024-11-04 11:45:13.245737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:47.961 [2024-11-04 11:45:13.245781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.961 [2024-11-04 11:45:13.246152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:47.961 [2024-11-04 11:45:13.246382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:47.961 [2024-11-04 11:45:13.246452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:47.961 [2024-11-04 11:45:13.246701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.961 pt4 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:47.961 11:45:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.961 "name": "raid_bdev1", 00:12:47.961 "uuid": "3ae622ea-98c9-43df-ad4b-6f6ba988ff34", 00:12:47.961 "strip_size_kb": 0, 00:12:47.961 "state": "online", 00:12:47.961 "raid_level": "raid1", 00:12:47.961 "superblock": true, 00:12:47.961 "num_base_bdevs": 4, 00:12:47.961 "num_base_bdevs_discovered": 3, 00:12:47.961 "num_base_bdevs_operational": 3, 00:12:47.961 "base_bdevs_list": [ 00:12:47.961 { 
00:12:47.961 "name": null, 00:12:47.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.961 "is_configured": false, 00:12:47.961 "data_offset": 2048, 00:12:47.961 "data_size": 63488 00:12:47.961 }, 00:12:47.961 { 00:12:47.961 "name": "pt2", 00:12:47.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.961 "is_configured": true, 00:12:47.961 "data_offset": 2048, 00:12:47.961 "data_size": 63488 00:12:47.961 }, 00:12:47.961 { 00:12:47.961 "name": "pt3", 00:12:47.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.961 "is_configured": true, 00:12:47.961 "data_offset": 2048, 00:12:47.961 "data_size": 63488 00:12:47.961 }, 00:12:47.961 { 00:12:47.961 "name": "pt4", 00:12:47.961 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.961 "is_configured": true, 00:12:47.961 "data_offset": 2048, 00:12:47.961 "data_size": 63488 00:12:47.961 } 00:12:47.961 ] 00:12:47.961 }' 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.961 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.220 
11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.220 [2024-11-04 11:45:13.711688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.220 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3ae622ea-98c9-43df-ad4b-6f6ba988ff34 '!=' 3ae622ea-98c9-43df-ad4b-6f6ba988ff34 ']' 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74758 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74758 ']' 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74758 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74758 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74758' 00:12:48.479 killing process with pid 74758 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74758 00:12:48.479 11:45:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74758 00:12:48.479 [2024-11-04 11:45:13.787797] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.479 [2024-11-04 11:45:13.787957] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.479 [2024-11-04 11:45:13.788148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.479 [2024-11-04 11:45:13.788214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:48.738 [2024-11-04 11:45:14.256441] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.114 11:45:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:50.114 00:12:50.114 real 0m8.696s 00:12:50.114 user 0m13.401s 00:12:50.114 sys 0m1.650s 00:12:50.114 11:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.114 ************************************ 00:12:50.114 END TEST raid_superblock_test 00:12:50.114 ************************************ 00:12:50.114 11:45:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.114 11:45:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:50.114 11:45:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:50.114 11:45:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:50.114 11:45:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.114 ************************************ 00:12:50.114 START TEST raid_read_error_test 00:12:50.114 ************************************ 00:12:50.114 11:45:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:12:50.114 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:50.115 11:45:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2qhktR8O2l 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75252 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75252 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75252 ']' 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:50.115 11:45:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.374 [2024-11-04 11:45:15.694787] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:50.374 [2024-11-04 11:45:15.694985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75252 ] 00:12:50.374 [2024-11-04 11:45:15.866728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.633 [2024-11-04 11:45:15.986012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.893 [2024-11-04 11:45:16.193535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.893 [2024-11-04 11:45:16.193677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.151 BaseBdev1_malloc 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.151 true 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.151 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.151 [2024-11-04 11:45:16.608092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:51.151 [2024-11-04 11:45:16.608200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.151 [2024-11-04 11:45:16.608226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:51.151 [2024-11-04 11:45:16.608237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.151 [2024-11-04 11:45:16.610553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.152 [2024-11-04 11:45:16.610598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.152 BaseBdev1 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.152 BaseBdev2_malloc 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.152 true 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.152 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.411 [2024-11-04 11:45:16.677117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:51.411 [2024-11-04 11:45:16.677172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.411 [2024-11-04 11:45:16.677189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:51.411 [2024-11-04 11:45:16.677199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.411 [2024-11-04 11:45:16.679243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.411 [2024-11-04 11:45:16.679323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.411 BaseBdev2 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.411 BaseBdev3_malloc 00:12:51.411 11:45:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.411 true 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.411 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.411 [2024-11-04 11:45:16.753427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:51.411 [2024-11-04 11:45:16.753475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.411 [2024-11-04 11:45:16.753492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:51.411 [2024-11-04 11:45:16.753503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.412 [2024-11-04 11:45:16.755544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.412 [2024-11-04 11:45:16.755583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:51.412 BaseBdev3 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.412 BaseBdev4_malloc 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.412 true 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.412 [2024-11-04 11:45:16.821298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:51.412 [2024-11-04 11:45:16.821365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.412 [2024-11-04 11:45:16.821383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:51.412 [2024-11-04 11:45:16.821408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.412 [2024-11-04 11:45:16.823560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.412 [2024-11-04 11:45:16.823599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:51.412 BaseBdev4 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.412 [2024-11-04 11:45:16.833342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.412 [2024-11-04 11:45:16.835295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.412 [2024-11-04 11:45:16.835429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.412 [2024-11-04 11:45:16.835504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.412 [2024-11-04 11:45:16.835752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:51.412 [2024-11-04 11:45:16.835766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:51.412 [2024-11-04 11:45:16.836003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:51.412 [2024-11-04 11:45:16.836173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:51.412 [2024-11-04 11:45:16.836183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:51.412 [2024-11-04 11:45:16.836336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:51.412 11:45:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.412 "name": "raid_bdev1", 00:12:51.412 "uuid": "47ef65fe-de0b-427b-89f2-5eba543f4acd", 00:12:51.412 "strip_size_kb": 0, 00:12:51.412 "state": "online", 00:12:51.412 "raid_level": "raid1", 00:12:51.412 "superblock": true, 00:12:51.412 "num_base_bdevs": 4, 00:12:51.412 "num_base_bdevs_discovered": 4, 00:12:51.412 "num_base_bdevs_operational": 4, 00:12:51.412 "base_bdevs_list": [ 00:12:51.412 { 
00:12:51.412 "name": "BaseBdev1", 00:12:51.412 "uuid": "dc19d326-a95b-5cb0-94f7-cf5b719e0f15", 00:12:51.412 "is_configured": true, 00:12:51.412 "data_offset": 2048, 00:12:51.412 "data_size": 63488 00:12:51.412 }, 00:12:51.412 { 00:12:51.412 "name": "BaseBdev2", 00:12:51.412 "uuid": "ea1cfcec-bb5d-5f19-8de3-8cec084dbfb4", 00:12:51.412 "is_configured": true, 00:12:51.412 "data_offset": 2048, 00:12:51.412 "data_size": 63488 00:12:51.412 }, 00:12:51.412 { 00:12:51.412 "name": "BaseBdev3", 00:12:51.412 "uuid": "6a87e7f8-4f6f-53a4-a5ee-31099a0e7e64", 00:12:51.412 "is_configured": true, 00:12:51.412 "data_offset": 2048, 00:12:51.412 "data_size": 63488 00:12:51.412 }, 00:12:51.412 { 00:12:51.412 "name": "BaseBdev4", 00:12:51.412 "uuid": "5b235371-035e-51b9-95a3-9187cf43e74c", 00:12:51.412 "is_configured": true, 00:12:51.412 "data_offset": 2048, 00:12:51.412 "data_size": 63488 00:12:51.412 } 00:12:51.412 ] 00:12:51.412 }' 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.412 11:45:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.980 11:45:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:51.980 11:45:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:51.980 [2024-11-04 11:45:17.385626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.915 11:45:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.915 11:45:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.915 "name": "raid_bdev1", 00:12:52.915 "uuid": "47ef65fe-de0b-427b-89f2-5eba543f4acd", 00:12:52.915 "strip_size_kb": 0, 00:12:52.915 "state": "online", 00:12:52.915 "raid_level": "raid1", 00:12:52.915 "superblock": true, 00:12:52.915 "num_base_bdevs": 4, 00:12:52.915 "num_base_bdevs_discovered": 4, 00:12:52.915 "num_base_bdevs_operational": 4, 00:12:52.915 "base_bdevs_list": [ 00:12:52.915 { 00:12:52.915 "name": "BaseBdev1", 00:12:52.915 "uuid": "dc19d326-a95b-5cb0-94f7-cf5b719e0f15", 00:12:52.915 "is_configured": true, 00:12:52.915 "data_offset": 2048, 00:12:52.915 "data_size": 63488 00:12:52.915 }, 00:12:52.915 { 00:12:52.915 "name": "BaseBdev2", 00:12:52.915 "uuid": "ea1cfcec-bb5d-5f19-8de3-8cec084dbfb4", 00:12:52.915 "is_configured": true, 00:12:52.915 "data_offset": 2048, 00:12:52.915 "data_size": 63488 00:12:52.915 }, 00:12:52.915 { 00:12:52.915 "name": "BaseBdev3", 00:12:52.915 "uuid": "6a87e7f8-4f6f-53a4-a5ee-31099a0e7e64", 00:12:52.915 "is_configured": true, 00:12:52.915 "data_offset": 2048, 00:12:52.915 "data_size": 63488 00:12:52.915 }, 00:12:52.915 { 00:12:52.915 "name": "BaseBdev4", 00:12:52.915 "uuid": "5b235371-035e-51b9-95a3-9187cf43e74c", 00:12:52.915 "is_configured": true, 00:12:52.915 "data_offset": 2048, 00:12:52.915 "data_size": 63488 00:12:52.915 } 00:12:52.915 ] 00:12:52.915 }' 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.915 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.482 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:53.482 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.482 11:45:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.482 [2024-11-04 11:45:18.789292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:53.482 [2024-11-04 11:45:18.789384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.482 [2024-11-04 11:45:18.792216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.482 [2024-11-04 11:45:18.792273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.482 [2024-11-04 11:45:18.792394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.482 [2024-11-04 11:45:18.792407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:53.482 { 00:12:53.482 "results": [ 00:12:53.482 { 00:12:53.482 "job": "raid_bdev1", 00:12:53.482 "core_mask": "0x1", 00:12:53.482 "workload": "randrw", 00:12:53.482 "percentage": 50, 00:12:53.482 "status": "finished", 00:12:53.482 "queue_depth": 1, 00:12:53.482 "io_size": 131072, 00:12:53.483 "runtime": 1.404726, 00:12:53.483 "iops": 10165.683556793281, 00:12:53.483 "mibps": 1270.7104445991602, 00:12:53.483 "io_failed": 0, 00:12:53.483 "io_timeout": 0, 00:12:53.483 "avg_latency_us": 95.53968661700488, 00:12:53.483 "min_latency_us": 23.923144104803495, 00:12:53.483 "max_latency_us": 1674.172925764192 00:12:53.483 } 00:12:53.483 ], 00:12:53.483 "core_count": 1 00:12:53.483 } 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75252 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75252 ']' 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75252 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75252 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75252' 00:12:53.483 killing process with pid 75252 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75252 00:12:53.483 [2024-11-04 11:45:18.837373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.483 11:45:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75252 00:12:53.741 [2024-11-04 11:45:19.177789] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2qhktR8O2l 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:55.117 11:45:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:55.117 00:12:55.117 real 0m4.840s 00:12:55.117 user 0m5.705s 00:12:55.118 sys 0m0.599s 
00:12:55.118 11:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:55.118 11:45:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.118 ************************************ 00:12:55.118 END TEST raid_read_error_test 00:12:55.118 ************************************ 00:12:55.118 11:45:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:55.118 11:45:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:55.118 11:45:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:55.118 11:45:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.118 ************************************ 00:12:55.118 START TEST raid_write_error_test 00:12:55.118 ************************************ 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WAWt9tY1Ir 00:12:55.118 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75397 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75397 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75397 ']' 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:55.118 11:45:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.118 [2024-11-04 11:45:20.596721] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:12:55.118 [2024-11-04 11:45:20.596937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75397 ] 00:12:55.378 [2024-11-04 11:45:20.767853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.378 [2024-11-04 11:45:20.894258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.637 [2024-11-04 11:45:21.119105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.637 [2024-11-04 11:45:21.119242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 BaseBdev1_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 true 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 [2024-11-04 11:45:21.538834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:56.206 [2024-11-04 11:45:21.538890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.206 [2024-11-04 11:45:21.538911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:56.206 [2024-11-04 11:45:21.538922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.206 [2024-11-04 11:45:21.541039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.206 [2024-11-04 11:45:21.541135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.206 BaseBdev1 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 BaseBdev2_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:56.206 11:45:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 true 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 [2024-11-04 11:45:21.608201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:56.206 [2024-11-04 11:45:21.608267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.206 [2024-11-04 11:45:21.608288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:56.206 [2024-11-04 11:45:21.608301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.206 [2024-11-04 11:45:21.610643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.206 [2024-11-04 11:45:21.610682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:56.206 BaseBdev2 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:56.206 BaseBdev3_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 true 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 [2024-11-04 11:45:21.691617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:56.206 [2024-11-04 11:45:21.691673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.206 [2024-11-04 11:45:21.691693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:56.206 [2024-11-04 11:45:21.691703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.206 [2024-11-04 11:45:21.693800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.206 [2024-11-04 11:45:21.693841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:56.206 BaseBdev3 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.206 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.466 BaseBdev4_malloc 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.466 true 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.466 [2024-11-04 11:45:21.762161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:56.466 [2024-11-04 11:45:21.762279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.466 [2024-11-04 11:45:21.762308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:56.466 [2024-11-04 11:45:21.762321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.466 [2024-11-04 11:45:21.764702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.466 [2024-11-04 11:45:21.764744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:56.466 BaseBdev4 
00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.466 [2024-11-04 11:45:21.774202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.466 [2024-11-04 11:45:21.776207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.466 [2024-11-04 11:45:21.776289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.466 [2024-11-04 11:45:21.776360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.466 [2024-11-04 11:45:21.776619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:56.466 [2024-11-04 11:45:21.776635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.466 [2024-11-04 11:45:21.776903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:56.466 [2024-11-04 11:45:21.777084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:56.466 [2024-11-04 11:45:21.777094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:56.466 [2024-11-04 11:45:21.777260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.466 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.467 "name": "raid_bdev1", 00:12:56.467 "uuid": "3c7fa333-d8cc-4acd-8f42-56c4f3c5fda8", 00:12:56.467 "strip_size_kb": 0, 00:12:56.467 "state": "online", 00:12:56.467 "raid_level": "raid1", 00:12:56.467 "superblock": true, 00:12:56.467 "num_base_bdevs": 4, 00:12:56.467 "num_base_bdevs_discovered": 4, 00:12:56.467 
"num_base_bdevs_operational": 4, 00:12:56.467 "base_bdevs_list": [ 00:12:56.467 { 00:12:56.467 "name": "BaseBdev1", 00:12:56.467 "uuid": "44dd90e1-2ed1-5e63-bc3d-0ce1f704d0bb", 00:12:56.467 "is_configured": true, 00:12:56.467 "data_offset": 2048, 00:12:56.467 "data_size": 63488 00:12:56.467 }, 00:12:56.467 { 00:12:56.467 "name": "BaseBdev2", 00:12:56.467 "uuid": "016a222f-6144-5fec-b1c4-4c37a124727d", 00:12:56.467 "is_configured": true, 00:12:56.467 "data_offset": 2048, 00:12:56.467 "data_size": 63488 00:12:56.467 }, 00:12:56.467 { 00:12:56.467 "name": "BaseBdev3", 00:12:56.467 "uuid": "9ee04f45-a2a6-5c95-bc91-79bbaadbc201", 00:12:56.467 "is_configured": true, 00:12:56.467 "data_offset": 2048, 00:12:56.467 "data_size": 63488 00:12:56.467 }, 00:12:56.467 { 00:12:56.467 "name": "BaseBdev4", 00:12:56.467 "uuid": "c5558f0a-cadf-51cc-8a3a-ef2d4f54f661", 00:12:56.467 "is_configured": true, 00:12:56.467 "data_offset": 2048, 00:12:56.467 "data_size": 63488 00:12:56.467 } 00:12:56.467 ] 00:12:56.467 }' 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.467 11:45:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.726 11:45:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:56.726 11:45:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:56.986 [2024-11-04 11:45:22.310932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.923 [2024-11-04 11:45:23.226264] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:57.923 [2024-11-04 11:45:23.226325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.923 [2024-11-04 11:45:23.226587] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.923 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.924 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.924 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.924 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.924 "name": "raid_bdev1", 00:12:57.924 "uuid": "3c7fa333-d8cc-4acd-8f42-56c4f3c5fda8", 00:12:57.924 "strip_size_kb": 0, 00:12:57.924 "state": "online", 00:12:57.924 "raid_level": "raid1", 00:12:57.924 "superblock": true, 00:12:57.924 "num_base_bdevs": 4, 00:12:57.924 "num_base_bdevs_discovered": 3, 00:12:57.924 "num_base_bdevs_operational": 3, 00:12:57.924 "base_bdevs_list": [ 00:12:57.924 { 00:12:57.924 "name": null, 00:12:57.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.924 "is_configured": false, 00:12:57.924 "data_offset": 0, 00:12:57.924 "data_size": 63488 00:12:57.924 }, 00:12:57.924 { 00:12:57.924 "name": "BaseBdev2", 00:12:57.924 "uuid": "016a222f-6144-5fec-b1c4-4c37a124727d", 00:12:57.924 "is_configured": true, 00:12:57.924 "data_offset": 2048, 00:12:57.924 "data_size": 63488 00:12:57.924 }, 00:12:57.924 { 00:12:57.924 "name": "BaseBdev3", 00:12:57.924 "uuid": "9ee04f45-a2a6-5c95-bc91-79bbaadbc201", 00:12:57.924 "is_configured": true, 00:12:57.924 "data_offset": 2048, 00:12:57.924 "data_size": 63488 00:12:57.924 }, 00:12:57.924 { 00:12:57.924 "name": "BaseBdev4", 00:12:57.924 "uuid": "c5558f0a-cadf-51cc-8a3a-ef2d4f54f661", 00:12:57.924 "is_configured": true, 00:12:57.924 "data_offset": 2048, 00:12:57.924 "data_size": 63488 00:12:57.924 } 00:12:57.924 ] 
00:12:57.924 }' 00:12:57.924 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.924 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.182 [2024-11-04 11:45:23.669928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.182 [2024-11-04 11:45:23.670013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.182 [2024-11-04 11:45:23.672669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.182 [2024-11-04 11:45:23.672753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.182 [2024-11-04 11:45:23.672881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.182 [2024-11-04 11:45:23.672930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:58.182 { 00:12:58.182 "results": [ 00:12:58.182 { 00:12:58.182 "job": "raid_bdev1", 00:12:58.182 "core_mask": "0x1", 00:12:58.182 "workload": "randrw", 00:12:58.182 "percentage": 50, 00:12:58.182 "status": "finished", 00:12:58.182 "queue_depth": 1, 00:12:58.182 "io_size": 131072, 00:12:58.182 "runtime": 1.35961, 00:12:58.182 "iops": 10969.322085009671, 00:12:58.182 "mibps": 1371.165260626209, 00:12:58.182 "io_failed": 0, 00:12:58.182 "io_timeout": 0, 00:12:58.182 "avg_latency_us": 88.37005457197685, 00:12:58.182 "min_latency_us": 23.699563318777294, 00:12:58.182 "max_latency_us": 1459.5353711790392 00:12:58.182 } 00:12:58.182 ], 00:12:58.182 "core_count": 1 
00:12:58.182 } 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75397 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75397 ']' 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75397 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:58.182 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75397 00:12:58.441 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:58.441 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:58.441 killing process with pid 75397 00:12:58.441 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75397' 00:12:58.441 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75397 00:12:58.441 [2024-11-04 11:45:23.719002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.441 11:45:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75397 00:12:58.701 [2024-11-04 11:45:24.068101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WAWt9tY1Ir 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:00.084 ************************************ 00:13:00.084 END TEST raid_write_error_test 00:13:00.084 ************************************ 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:00.084 00:13:00.084 real 0m4.813s 00:13:00.084 user 0m5.684s 00:13:00.084 sys 0m0.582s 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:00.084 11:45:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.084 11:45:25 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:00.084 11:45:25 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:00.084 11:45:25 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:00.084 11:45:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:00.084 11:45:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.084 11:45:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.084 ************************************ 00:13:00.084 START TEST raid_rebuild_test 00:13:00.084 ************************************ 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:00.084 
11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75541 00:13:00.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75541 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75541 ']' 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:00.084 11:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.084 [2024-11-04 11:45:25.469564] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:13:00.084 [2024-11-04 11:45:25.469760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:00.084 Zero copy mechanism will not be used. 
00:13:00.084 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75541 ] 00:13:00.343 [2024-11-04 11:45:25.627383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.343 [2024-11-04 11:45:25.746997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.602 [2024-11-04 11:45:25.959243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.602 [2024-11-04 11:45:25.959370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.862 BaseBdev1_malloc 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.862 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 [2024-11-04 11:45:26.388901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:01.122 [2024-11-04 11:45:26.389034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.122 [2024-11-04 
11:45:26.389082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.122 [2024-11-04 11:45:26.389119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.122 [2024-11-04 11:45:26.391398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.122 [2024-11-04 11:45:26.391484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.122 BaseBdev1 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 BaseBdev2_malloc 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 [2024-11-04 11:45:26.446944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:01.122 [2024-11-04 11:45:26.447013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.122 [2024-11-04 11:45:26.447033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:01.122 [2024-11-04 11:45:26.447044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:01.122 [2024-11-04 11:45:26.449202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.122 [2024-11-04 11:45:26.449293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.122 BaseBdev2 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 spare_malloc 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 spare_delay 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 [2024-11-04 11:45:26.530028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.122 [2024-11-04 11:45:26.530141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.122 [2024-11-04 11:45:26.530169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:01.122 [2024-11-04 11:45:26.530181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.122 [2024-11-04 11:45:26.532657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.122 [2024-11-04 11:45:26.532702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.122 spare 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 [2024-11-04 11:45:26.542067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.122 [2024-11-04 11:45:26.544104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.122 [2024-11-04 11:45:26.544240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:01.122 [2024-11-04 11:45:26.544260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:01.122 [2024-11-04 11:45:26.544589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:01.122 [2024-11-04 11:45:26.544772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:01.122 [2024-11-04 11:45:26.544792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:01.122 [2024-11-04 11:45:26.544979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 
11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.122 "name": "raid_bdev1", 00:13:01.122 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:01.122 "strip_size_kb": 0, 00:13:01.122 "state": "online", 00:13:01.122 "raid_level": "raid1", 00:13:01.122 "superblock": false, 00:13:01.122 "num_base_bdevs": 2, 00:13:01.122 "num_base_bdevs_discovered": 
2, 00:13:01.122 "num_base_bdevs_operational": 2, 00:13:01.122 "base_bdevs_list": [ 00:13:01.122 { 00:13:01.122 "name": "BaseBdev1", 00:13:01.122 "uuid": "7183bbcc-8a20-525b-ad95-b7bdb0323540", 00:13:01.122 "is_configured": true, 00:13:01.122 "data_offset": 0, 00:13:01.122 "data_size": 65536 00:13:01.122 }, 00:13:01.122 { 00:13:01.122 "name": "BaseBdev2", 00:13:01.122 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:01.122 "is_configured": true, 00:13:01.122 "data_offset": 0, 00:13:01.122 "data_size": 65536 00:13:01.122 } 00:13:01.122 ] 00:13:01.122 }' 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.122 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.691 [2024-11-04 11:45:26.965668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.691 11:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:01.691 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:01.950 [2024-11-04 11:45:27.244958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:01.950 /dev/nbd0 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 
00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.950 1+0 records in 00:13:01.950 1+0 records out 00:13:01.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505619 s, 8.1 MB/s 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:13:01.950 11:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:06.145 65536+0 records in 00:13:06.145 65536+0 records out 00:13:06.145 33554432 bytes (34 MB, 32 MiB) copied, 4.28948 s, 7.8 MB/s 00:13:06.145 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:06.145 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.145 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:06.145 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.145 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:06.145 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.145 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.405 [2024-11-04 11:45:31.836615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.405 
11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.405 [2024-11-04 11:45:31.865280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.405 "name": "raid_bdev1", 00:13:06.405 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:06.405 "strip_size_kb": 0, 00:13:06.405 "state": "online", 00:13:06.405 "raid_level": "raid1", 00:13:06.405 "superblock": false, 00:13:06.405 "num_base_bdevs": 2, 00:13:06.405 "num_base_bdevs_discovered": 1, 00:13:06.405 "num_base_bdevs_operational": 1, 00:13:06.405 "base_bdevs_list": [ 00:13:06.405 { 00:13:06.405 "name": null, 00:13:06.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.405 "is_configured": false, 00:13:06.405 "data_offset": 0, 00:13:06.405 "data_size": 65536 00:13:06.405 }, 00:13:06.405 { 00:13:06.405 "name": "BaseBdev2", 00:13:06.405 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:06.405 "is_configured": true, 00:13:06.405 "data_offset": 0, 00:13:06.405 "data_size": 65536 00:13:06.405 } 00:13:06.405 ] 00:13:06.405 }' 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.405 11:45:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.973 11:45:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.973 11:45:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.973 11:45:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.973 [2024-11-04 11:45:32.320546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.973 [2024-11-04 11:45:32.340771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:06.973 11:45:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.973 11:45:32 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:06.973 [2024-11-04 11:45:32.342962] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.909 "name": "raid_bdev1", 00:13:07.909 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:07.909 "strip_size_kb": 0, 00:13:07.909 "state": "online", 00:13:07.909 "raid_level": "raid1", 00:13:07.909 "superblock": false, 00:13:07.909 "num_base_bdevs": 2, 00:13:07.909 "num_base_bdevs_discovered": 2, 00:13:07.909 "num_base_bdevs_operational": 2, 00:13:07.909 "process": { 00:13:07.909 "type": "rebuild", 00:13:07.909 "target": "spare", 00:13:07.909 "progress": { 00:13:07.909 "blocks": 20480, 00:13:07.909 "percent": 31 00:13:07.909 } 00:13:07.909 }, 00:13:07.909 "base_bdevs_list": [ 00:13:07.909 { 
00:13:07.909 "name": "spare", 00:13:07.909 "uuid": "67e68aea-3ca4-5142-a0da-bbd195d9bd97", 00:13:07.909 "is_configured": true, 00:13:07.909 "data_offset": 0, 00:13:07.909 "data_size": 65536 00:13:07.909 }, 00:13:07.909 { 00:13:07.909 "name": "BaseBdev2", 00:13:07.909 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:07.909 "is_configured": true, 00:13:07.909 "data_offset": 0, 00:13:07.909 "data_size": 65536 00:13:07.909 } 00:13:07.909 ] 00:13:07.909 }' 00:13:07.909 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.168 [2024-11-04 11:45:33.501862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.168 [2024-11-04 11:45:33.548878] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.168 [2024-11-04 11:45:33.549055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.168 [2024-11-04 11:45:33.549099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.168 [2024-11-04 11:45:33.549154] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.168 11:45:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.168 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.168 "name": "raid_bdev1", 00:13:08.168 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:08.168 "strip_size_kb": 0, 00:13:08.168 "state": "online", 00:13:08.168 "raid_level": "raid1", 00:13:08.168 "superblock": false, 00:13:08.168 "num_base_bdevs": 2, 00:13:08.168 "num_base_bdevs_discovered": 1, 
00:13:08.168 "num_base_bdevs_operational": 1, 00:13:08.168 "base_bdevs_list": [ 00:13:08.168 { 00:13:08.168 "name": null, 00:13:08.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.168 "is_configured": false, 00:13:08.168 "data_offset": 0, 00:13:08.168 "data_size": 65536 00:13:08.168 }, 00:13:08.168 { 00:13:08.169 "name": "BaseBdev2", 00:13:08.169 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:08.169 "is_configured": true, 00:13:08.169 "data_offset": 0, 00:13:08.169 "data_size": 65536 00:13:08.169 } 00:13:08.169 ] 00:13:08.169 }' 00:13:08.169 11:45:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.169 11:45:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.742 "name": "raid_bdev1", 00:13:08.742 "uuid": 
"87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:08.742 "strip_size_kb": 0, 00:13:08.742 "state": "online", 00:13:08.742 "raid_level": "raid1", 00:13:08.742 "superblock": false, 00:13:08.742 "num_base_bdevs": 2, 00:13:08.742 "num_base_bdevs_discovered": 1, 00:13:08.742 "num_base_bdevs_operational": 1, 00:13:08.742 "base_bdevs_list": [ 00:13:08.742 { 00:13:08.742 "name": null, 00:13:08.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.742 "is_configured": false, 00:13:08.742 "data_offset": 0, 00:13:08.742 "data_size": 65536 00:13:08.742 }, 00:13:08.742 { 00:13:08.742 "name": "BaseBdev2", 00:13:08.742 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:08.742 "is_configured": true, 00:13:08.742 "data_offset": 0, 00:13:08.742 "data_size": 65536 00:13:08.742 } 00:13:08.742 ] 00:13:08.742 }' 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.742 [2024-11-04 11:45:34.180355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:08.742 [2024-11-04 11:45:34.196749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.742 11:45:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:13:08.742 [2024-11-04 11:45:34.198623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.691 11:45:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.950 11:45:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.950 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.950 "name": "raid_bdev1", 00:13:09.950 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:09.950 "strip_size_kb": 0, 00:13:09.950 "state": "online", 00:13:09.951 "raid_level": "raid1", 00:13:09.951 "superblock": false, 00:13:09.951 "num_base_bdevs": 2, 00:13:09.951 "num_base_bdevs_discovered": 2, 00:13:09.951 "num_base_bdevs_operational": 2, 00:13:09.951 "process": { 00:13:09.951 "type": "rebuild", 00:13:09.951 "target": "spare", 00:13:09.951 "progress": { 00:13:09.951 "blocks": 20480, 00:13:09.951 "percent": 31 00:13:09.951 } 00:13:09.951 }, 00:13:09.951 "base_bdevs_list": [ 00:13:09.951 { 00:13:09.951 "name": "spare", 00:13:09.951 "uuid": 
"67e68aea-3ca4-5142-a0da-bbd195d9bd97", 00:13:09.951 "is_configured": true, 00:13:09.951 "data_offset": 0, 00:13:09.951 "data_size": 65536 00:13:09.951 }, 00:13:09.951 { 00:13:09.951 "name": "BaseBdev2", 00:13:09.951 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:09.951 "is_configured": true, 00:13:09.951 "data_offset": 0, 00:13:09.951 "data_size": 65536 00:13:09.951 } 00:13:09.951 ] 00:13:09.951 }' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=377 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.951 "name": "raid_bdev1", 00:13:09.951 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:09.951 "strip_size_kb": 0, 00:13:09.951 "state": "online", 00:13:09.951 "raid_level": "raid1", 00:13:09.951 "superblock": false, 00:13:09.951 "num_base_bdevs": 2, 00:13:09.951 "num_base_bdevs_discovered": 2, 00:13:09.951 "num_base_bdevs_operational": 2, 00:13:09.951 "process": { 00:13:09.951 "type": "rebuild", 00:13:09.951 "target": "spare", 00:13:09.951 "progress": { 00:13:09.951 "blocks": 22528, 00:13:09.951 "percent": 34 00:13:09.951 } 00:13:09.951 }, 00:13:09.951 "base_bdevs_list": [ 00:13:09.951 { 00:13:09.951 "name": "spare", 00:13:09.951 "uuid": "67e68aea-3ca4-5142-a0da-bbd195d9bd97", 00:13:09.951 "is_configured": true, 00:13:09.951 "data_offset": 0, 00:13:09.951 "data_size": 65536 00:13:09.951 }, 00:13:09.951 { 00:13:09.951 "name": "BaseBdev2", 00:13:09.951 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:09.951 "is_configured": true, 00:13:09.951 "data_offset": 0, 00:13:09.951 "data_size": 65536 00:13:09.951 } 00:13:09.951 ] 00:13:09.951 }' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.951 11:45:35 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.210 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.210 11:45:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.147 "name": "raid_bdev1", 00:13:11.147 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:11.147 "strip_size_kb": 0, 00:13:11.147 "state": "online", 00:13:11.147 "raid_level": "raid1", 00:13:11.147 "superblock": false, 00:13:11.147 "num_base_bdevs": 2, 00:13:11.147 "num_base_bdevs_discovered": 2, 00:13:11.147 "num_base_bdevs_operational": 2, 00:13:11.147 "process": { 00:13:11.147 "type": "rebuild", 00:13:11.147 "target": "spare", 
00:13:11.147 "progress": { 00:13:11.147 "blocks": 45056, 00:13:11.147 "percent": 68 00:13:11.147 } 00:13:11.147 }, 00:13:11.147 "base_bdevs_list": [ 00:13:11.147 { 00:13:11.147 "name": "spare", 00:13:11.147 "uuid": "67e68aea-3ca4-5142-a0da-bbd195d9bd97", 00:13:11.147 "is_configured": true, 00:13:11.147 "data_offset": 0, 00:13:11.147 "data_size": 65536 00:13:11.147 }, 00:13:11.147 { 00:13:11.147 "name": "BaseBdev2", 00:13:11.147 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:11.147 "is_configured": true, 00:13:11.147 "data_offset": 0, 00:13:11.147 "data_size": 65536 00:13:11.147 } 00:13:11.147 ] 00:13:11.147 }' 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.147 11:45:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.086 [2024-11-04 11:45:37.413564] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:12.086 [2024-11-04 11:45:37.413732] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:12.086 [2024-11-04 11:45:37.413824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.346 "name": "raid_bdev1", 00:13:12.346 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:12.346 "strip_size_kb": 0, 00:13:12.346 "state": "online", 00:13:12.346 "raid_level": "raid1", 00:13:12.346 "superblock": false, 00:13:12.346 "num_base_bdevs": 2, 00:13:12.346 "num_base_bdevs_discovered": 2, 00:13:12.346 "num_base_bdevs_operational": 2, 00:13:12.346 "base_bdevs_list": [ 00:13:12.346 { 00:13:12.346 "name": "spare", 00:13:12.346 "uuid": "67e68aea-3ca4-5142-a0da-bbd195d9bd97", 00:13:12.346 "is_configured": true, 00:13:12.346 "data_offset": 0, 00:13:12.346 "data_size": 65536 00:13:12.346 }, 00:13:12.346 { 00:13:12.346 "name": "BaseBdev2", 00:13:12.346 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:12.346 "is_configured": true, 00:13:12.346 "data_offset": 0, 00:13:12.346 "data_size": 65536 00:13:12.346 } 00:13:12.346 ] 00:13:12.346 }' 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.346 "name": "raid_bdev1", 00:13:12.346 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:12.346 "strip_size_kb": 0, 00:13:12.346 "state": "online", 00:13:12.346 "raid_level": "raid1", 00:13:12.346 "superblock": false, 00:13:12.346 "num_base_bdevs": 2, 00:13:12.346 "num_base_bdevs_discovered": 2, 00:13:12.346 "num_base_bdevs_operational": 2, 00:13:12.346 "base_bdevs_list": [ 00:13:12.346 { 00:13:12.346 "name": "spare", 00:13:12.346 "uuid": "67e68aea-3ca4-5142-a0da-bbd195d9bd97", 00:13:12.346 "is_configured": true, 00:13:12.346 "data_offset": 0, 00:13:12.346 "data_size": 65536 
00:13:12.346 }, 00:13:12.346 { 00:13:12.346 "name": "BaseBdev2", 00:13:12.346 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:12.346 "is_configured": true, 00:13:12.346 "data_offset": 0, 00:13:12.346 "data_size": 65536 00:13:12.346 } 00:13:12.346 ] 00:13:12.346 }' 00:13:12.346 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.605 "name": "raid_bdev1", 00:13:12.605 "uuid": "87210a17-49c7-4aed-b8b5-aef070c26cd6", 00:13:12.605 "strip_size_kb": 0, 00:13:12.605 "state": "online", 00:13:12.605 "raid_level": "raid1", 00:13:12.605 "superblock": false, 00:13:12.605 "num_base_bdevs": 2, 00:13:12.605 "num_base_bdevs_discovered": 2, 00:13:12.605 "num_base_bdevs_operational": 2, 00:13:12.605 "base_bdevs_list": [ 00:13:12.605 { 00:13:12.605 "name": "spare", 00:13:12.605 "uuid": "67e68aea-3ca4-5142-a0da-bbd195d9bd97", 00:13:12.605 "is_configured": true, 00:13:12.605 "data_offset": 0, 00:13:12.605 "data_size": 65536 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "name": "BaseBdev2", 00:13:12.605 "uuid": "173a8747-0e20-5026-8bf0-f2622b1640cf", 00:13:12.605 "is_configured": true, 00:13:12.605 "data_offset": 0, 00:13:12.605 "data_size": 65536 00:13:12.605 } 00:13:12.605 ] 00:13:12.605 }' 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.605 11:45:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.863 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.863 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.863 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.121 [2024-11-04 11:45:38.386849] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.121 [2024-11-04 11:45:38.386931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:13:13.121 [2024-11-04 11:45:38.387047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.121 [2024-11-04 11:45:38.387141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.121 [2024-11-04 11:45:38.387154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.122 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:13.122 /dev/nbd0 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:13.387 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.388 1+0 records in 00:13:13.388 1+0 records out 00:13:13.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319683 s, 12.8 MB/s 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.388 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:13.388 /dev/nbd1 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:13.647 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:13:13.648 1+0 records in 00:13:13.648 1+0 records out 00:13:13.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264174 s, 15.5 MB/s 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:13.648 11:45:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:13.648 11:45:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:13.648 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:13.648 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:13.648 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:13.648 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:13.648 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.648 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:13.907 11:45:39 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.907 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75541 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 
75541 ']' 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75541 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75541 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:14.165 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:14.166 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75541' 00:13:14.166 killing process with pid 75541 00:13:14.166 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75541 00:13:14.166 Received shutdown signal, test time was about 60.000000 seconds 00:13:14.166 00:13:14.166 Latency(us) 00:13:14.166 [2024-11-04T11:45:39.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.166 [2024-11-04T11:45:39.688Z] =================================================================================================================== 00:13:14.166 [2024-11-04T11:45:39.688Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:14.166 11:45:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75541 00:13:14.166 [2024-11-04 11:45:39.663070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.732 [2024-11-04 11:45:39.976095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.670 11:45:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:15.670 00:13:15.670 real 0m15.752s 00:13:15.670 user 0m17.943s 00:13:15.670 sys 0m3.021s 00:13:15.670 11:45:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:15.670 ************************************ 00:13:15.670 END TEST raid_rebuild_test 00:13:15.670 ************************************ 00:13:15.670 11:45:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.670 11:45:41 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:15.670 11:45:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:15.670 11:45:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.670 11:45:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.929 ************************************ 00:13:15.929 START TEST raid_rebuild_test_sb 00:13:15.929 ************************************ 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75959 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75959 00:13:15.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75959 ']' 00:13:15.929 11:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.930 11:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.930 11:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.930 11:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.930 11:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.930 [2024-11-04 11:45:41.306059] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:13:15.930 [2024-11-04 11:45:41.306638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75959 ] 00:13:15.930 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:15.930 Zero copy mechanism will not be used. 
00:13:16.188 [2024-11-04 11:45:41.481196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.188 [2024-11-04 11:45:41.606406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.446 [2024-11-04 11:45:41.822665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.446 [2024-11-04 11:45:41.822801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.704 BaseBdev1_malloc 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.704 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.705 [2024-11-04 11:45:42.220038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:16.705 [2024-11-04 11:45:42.220194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.705 [2024-11-04 11:45:42.220248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:16.705 [2024-11-04 
11:45:42.220313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.705 [2024-11-04 11:45:42.222858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.705 [2024-11-04 11:45:42.222951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.705 BaseBdev1 00:13:16.705 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.705 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.705 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.705 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.705 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.990 BaseBdev2_malloc 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.990 [2024-11-04 11:45:42.278913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:16.990 [2024-11-04 11:45:42.279035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.990 [2024-11-04 11:45:42.279063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:16.990 [2024-11-04 11:45:42.279078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.990 [2024-11-04 11:45:42.281546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:16.990 [2024-11-04 11:45:42.281588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.990 BaseBdev2 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.990 spare_malloc 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.990 spare_delay 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.990 [2024-11-04 11:45:42.355503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.990 [2024-11-04 11:45:42.355589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.990 [2024-11-04 11:45:42.355615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:16.990 [2024-11-04 11:45:42.355627] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.990 [2024-11-04 11:45:42.358070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.990 [2024-11-04 11:45:42.358116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:16.990 spare 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.990 [2024-11-04 11:45:42.363554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.990 [2024-11-04 11:45:42.365644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.990 [2024-11-04 11:45:42.365842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:16.990 [2024-11-04 11:45:42.365862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.990 [2024-11-04 11:45:42.366153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:16.990 [2024-11-04 11:45:42.366352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:16.990 [2024-11-04 11:45:42.366363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:16.990 [2024-11-04 11:45:42.366571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.990 "name": "raid_bdev1", 00:13:16.990 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:16.990 "strip_size_kb": 0, 00:13:16.990 "state": "online", 00:13:16.990 "raid_level": "raid1", 00:13:16.990 "superblock": true, 00:13:16.990 "num_base_bdevs": 2, 00:13:16.990 
"num_base_bdevs_discovered": 2, 00:13:16.990 "num_base_bdevs_operational": 2, 00:13:16.990 "base_bdevs_list": [ 00:13:16.990 { 00:13:16.990 "name": "BaseBdev1", 00:13:16.990 "uuid": "c6de05be-93fe-5601-b5d2-6e39a5ed9794", 00:13:16.990 "is_configured": true, 00:13:16.990 "data_offset": 2048, 00:13:16.990 "data_size": 63488 00:13:16.990 }, 00:13:16.990 { 00:13:16.990 "name": "BaseBdev2", 00:13:16.990 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:16.990 "is_configured": true, 00:13:16.990 "data_offset": 2048, 00:13:16.990 "data_size": 63488 00:13:16.990 } 00:13:16.990 ] 00:13:16.990 }' 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.990 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 [2024-11-04 11:45:42.791123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.556 11:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:17.814 [2024-11-04 11:45:43.122307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:17.814 /dev/nbd0 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.814 1+0 records in 00:13:17.814 1+0 records out 00:13:17.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371042 s, 11.0 MB/s 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.814 11:45:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:17.814 11:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:22.001 63488+0 records in 00:13:22.001 63488+0 records out 00:13:22.001 32505856 bytes (33 MB, 31 MiB) copied, 4.2408 s, 7.7 MB/s 00:13:22.001 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:22.001 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.001 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:22.001 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.001 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:22.001 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.001 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:22.260 [2024-11-04 11:45:47.680517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.260 [2024-11-04 11:45:47.718354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.260 11:45:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.260 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.260 "name": "raid_bdev1", 00:13:22.260 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:22.260 "strip_size_kb": 0, 00:13:22.260 "state": "online", 00:13:22.260 "raid_level": "raid1", 00:13:22.260 "superblock": true, 00:13:22.260 "num_base_bdevs": 2, 00:13:22.260 "num_base_bdevs_discovered": 1, 00:13:22.260 "num_base_bdevs_operational": 1, 00:13:22.260 "base_bdevs_list": [ 00:13:22.260 { 00:13:22.260 "name": null, 00:13:22.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.260 "is_configured": false, 00:13:22.260 "data_offset": 0, 00:13:22.260 "data_size": 63488 00:13:22.260 }, 00:13:22.260 { 00:13:22.261 "name": "BaseBdev2", 00:13:22.261 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:22.261 "is_configured": true, 00:13:22.261 "data_offset": 2048, 00:13:22.261 "data_size": 63488 00:13:22.261 } 00:13:22.261 ] 00:13:22.261 }' 00:13:22.261 11:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.261 11:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.830 11:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.830 11:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.830 11:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.830 [2024-11-04 11:45:48.161633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:13:22.830 [2024-11-04 11:45:48.179451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:22.830 11:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.830 11:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:22.830 [2024-11-04 11:45:48.181578] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.769 "name": "raid_bdev1", 00:13:23.769 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:23.769 "strip_size_kb": 0, 00:13:23.769 "state": "online", 00:13:23.769 "raid_level": "raid1", 00:13:23.769 "superblock": true, 00:13:23.769 "num_base_bdevs": 2, 00:13:23.769 
"num_base_bdevs_discovered": 2, 00:13:23.769 "num_base_bdevs_operational": 2, 00:13:23.769 "process": { 00:13:23.769 "type": "rebuild", 00:13:23.769 "target": "spare", 00:13:23.769 "progress": { 00:13:23.769 "blocks": 20480, 00:13:23.769 "percent": 32 00:13:23.769 } 00:13:23.769 }, 00:13:23.769 "base_bdevs_list": [ 00:13:23.769 { 00:13:23.769 "name": "spare", 00:13:23.769 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:23.769 "is_configured": true, 00:13:23.769 "data_offset": 2048, 00:13:23.769 "data_size": 63488 00:13:23.769 }, 00:13:23.769 { 00:13:23.769 "name": "BaseBdev2", 00:13:23.769 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:23.769 "is_configured": true, 00:13:23.769 "data_offset": 2048, 00:13:23.769 "data_size": 63488 00:13:23.769 } 00:13:23.769 ] 00:13:23.769 }' 00:13:23.769 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.028 [2024-11-04 11:45:49.352971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:24.028 [2024-11-04 11:45:49.387454] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:24.028 [2024-11-04 11:45:49.387541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.028 [2024-11-04 11:45:49.387559] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:24.028 [2024-11-04 11:45:49.387574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.028 11:45:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.028 "name": "raid_bdev1", 00:13:24.028 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:24.028 "strip_size_kb": 0, 00:13:24.028 "state": "online", 00:13:24.028 "raid_level": "raid1", 00:13:24.028 "superblock": true, 00:13:24.028 "num_base_bdevs": 2, 00:13:24.028 "num_base_bdevs_discovered": 1, 00:13:24.028 "num_base_bdevs_operational": 1, 00:13:24.028 "base_bdevs_list": [ 00:13:24.028 { 00:13:24.028 "name": null, 00:13:24.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.028 "is_configured": false, 00:13:24.028 "data_offset": 0, 00:13:24.028 "data_size": 63488 00:13:24.028 }, 00:13:24.028 { 00:13:24.028 "name": "BaseBdev2", 00:13:24.028 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:24.028 "is_configured": true, 00:13:24.028 "data_offset": 2048, 00:13:24.028 "data_size": 63488 00:13:24.028 } 00:13:24.028 ] 00:13:24.028 }' 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.028 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.595 "name": "raid_bdev1", 00:13:24.595 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:24.595 "strip_size_kb": 0, 00:13:24.595 "state": "online", 00:13:24.595 "raid_level": "raid1", 00:13:24.595 "superblock": true, 00:13:24.595 "num_base_bdevs": 2, 00:13:24.595 "num_base_bdevs_discovered": 1, 00:13:24.595 "num_base_bdevs_operational": 1, 00:13:24.595 "base_bdevs_list": [ 00:13:24.595 { 00:13:24.595 "name": null, 00:13:24.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.595 "is_configured": false, 00:13:24.595 "data_offset": 0, 00:13:24.595 "data_size": 63488 00:13:24.595 }, 00:13:24.595 { 00:13:24.595 "name": "BaseBdev2", 00:13:24.595 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:24.595 "is_configured": true, 00:13:24.595 "data_offset": 2048, 00:13:24.595 "data_size": 63488 00:13:24.595 } 00:13:24.595 ] 00:13:24.595 }' 00:13:24.595 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:24.596 [2024-11-04 11:45:49.954611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.596 [2024-11-04 11:45:49.971495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.596 11:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.596 [2024-11-04 11:45:49.973550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.536 11:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.536 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.536 "name": "raid_bdev1", 00:13:25.536 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:25.536 "strip_size_kb": 0, 00:13:25.536 "state": "online", 00:13:25.536 "raid_level": "raid1", 
00:13:25.536 "superblock": true, 00:13:25.536 "num_base_bdevs": 2, 00:13:25.536 "num_base_bdevs_discovered": 2, 00:13:25.536 "num_base_bdevs_operational": 2, 00:13:25.536 "process": { 00:13:25.536 "type": "rebuild", 00:13:25.536 "target": "spare", 00:13:25.536 "progress": { 00:13:25.536 "blocks": 20480, 00:13:25.536 "percent": 32 00:13:25.536 } 00:13:25.536 }, 00:13:25.536 "base_bdevs_list": [ 00:13:25.536 { 00:13:25.536 "name": "spare", 00:13:25.536 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:25.536 "is_configured": true, 00:13:25.536 "data_offset": 2048, 00:13:25.536 "data_size": 63488 00:13:25.536 }, 00:13:25.536 { 00:13:25.536 "name": "BaseBdev2", 00:13:25.536 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:25.536 "is_configured": true, 00:13:25.536 "data_offset": 2048, 00:13:25.536 "data_size": 63488 00:13:25.536 } 00:13:25.536 ] 00:13:25.536 }' 00:13:25.536 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:25.796 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:25.796 11:45:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.796 "name": "raid_bdev1", 00:13:25.796 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:25.796 "strip_size_kb": 0, 00:13:25.796 "state": "online", 00:13:25.796 "raid_level": "raid1", 00:13:25.796 "superblock": true, 00:13:25.796 "num_base_bdevs": 2, 00:13:25.796 "num_base_bdevs_discovered": 2, 00:13:25.796 "num_base_bdevs_operational": 2, 00:13:25.796 "process": { 00:13:25.796 "type": "rebuild", 00:13:25.796 "target": "spare", 00:13:25.796 "progress": { 00:13:25.796 "blocks": 22528, 00:13:25.796 "percent": 35 00:13:25.796 } 00:13:25.796 }, 00:13:25.796 "base_bdevs_list": [ 
00:13:25.796 { 00:13:25.796 "name": "spare", 00:13:25.796 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:25.796 "is_configured": true, 00:13:25.796 "data_offset": 2048, 00:13:25.796 "data_size": 63488 00:13:25.796 }, 00:13:25.796 { 00:13:25.796 "name": "BaseBdev2", 00:13:25.796 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:25.796 "is_configured": true, 00:13:25.796 "data_offset": 2048, 00:13:25.796 "data_size": 63488 00:13:25.796 } 00:13:25.796 ] 00:13:25.796 }' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.796 11:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.180 "name": "raid_bdev1", 00:13:27.180 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:27.180 "strip_size_kb": 0, 00:13:27.180 "state": "online", 00:13:27.180 "raid_level": "raid1", 00:13:27.180 "superblock": true, 00:13:27.180 "num_base_bdevs": 2, 00:13:27.180 "num_base_bdevs_discovered": 2, 00:13:27.180 "num_base_bdevs_operational": 2, 00:13:27.180 "process": { 00:13:27.180 "type": "rebuild", 00:13:27.180 "target": "spare", 00:13:27.180 "progress": { 00:13:27.180 "blocks": 45056, 00:13:27.180 "percent": 70 00:13:27.180 } 00:13:27.180 }, 00:13:27.180 "base_bdevs_list": [ 00:13:27.180 { 00:13:27.180 "name": "spare", 00:13:27.180 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:27.180 "is_configured": true, 00:13:27.180 "data_offset": 2048, 00:13:27.180 "data_size": 63488 00:13:27.180 }, 00:13:27.180 { 00:13:27.180 "name": "BaseBdev2", 00:13:27.180 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:27.180 "is_configured": true, 00:13:27.180 "data_offset": 2048, 00:13:27.180 "data_size": 63488 00:13:27.180 } 00:13:27.180 ] 00:13:27.180 }' 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.180 11:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.744 [2024-11-04 
11:45:53.088446] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.744 [2024-11-04 11:45:53.088544] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.744 [2024-11-04 11:45:53.088668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.001 "name": "raid_bdev1", 00:13:28.001 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:28.001 "strip_size_kb": 0, 00:13:28.001 "state": "online", 00:13:28.001 "raid_level": "raid1", 00:13:28.001 "superblock": true, 00:13:28.001 "num_base_bdevs": 2, 00:13:28.001 "num_base_bdevs_discovered": 2, 00:13:28.001 
"num_base_bdevs_operational": 2, 00:13:28.001 "base_bdevs_list": [ 00:13:28.001 { 00:13:28.001 "name": "spare", 00:13:28.001 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 }, 00:13:28.001 { 00:13:28.001 "name": "BaseBdev2", 00:13:28.001 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 } 00:13:28.001 ] 00:13:28.001 }' 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:28.001 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.260 11:45:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.260 "name": "raid_bdev1", 00:13:28.260 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:28.260 "strip_size_kb": 0, 00:13:28.260 "state": "online", 00:13:28.260 "raid_level": "raid1", 00:13:28.260 "superblock": true, 00:13:28.260 "num_base_bdevs": 2, 00:13:28.260 "num_base_bdevs_discovered": 2, 00:13:28.260 "num_base_bdevs_operational": 2, 00:13:28.260 "base_bdevs_list": [ 00:13:28.260 { 00:13:28.260 "name": "spare", 00:13:28.260 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:28.260 "is_configured": true, 00:13:28.260 "data_offset": 2048, 00:13:28.260 "data_size": 63488 00:13:28.260 }, 00:13:28.260 { 00:13:28.260 "name": "BaseBdev2", 00:13:28.260 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:28.260 "is_configured": true, 00:13:28.260 "data_offset": 2048, 00:13:28.260 "data_size": 63488 00:13:28.260 } 00:13:28.260 ] 00:13:28.260 }' 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.260 
11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.260 "name": "raid_bdev1", 00:13:28.260 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:28.260 "strip_size_kb": 0, 00:13:28.260 "state": "online", 00:13:28.260 "raid_level": "raid1", 00:13:28.260 "superblock": true, 00:13:28.260 "num_base_bdevs": 2, 00:13:28.260 "num_base_bdevs_discovered": 2, 00:13:28.260 "num_base_bdevs_operational": 2, 00:13:28.260 "base_bdevs_list": [ 00:13:28.260 { 00:13:28.260 "name": "spare", 00:13:28.260 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:28.260 "is_configured": true, 00:13:28.260 "data_offset": 2048, 00:13:28.260 "data_size": 63488 00:13:28.260 }, 
00:13:28.260 { 00:13:28.260 "name": "BaseBdev2", 00:13:28.260 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:28.260 "is_configured": true, 00:13:28.260 "data_offset": 2048, 00:13:28.260 "data_size": 63488 00:13:28.260 } 00:13:28.260 ] 00:13:28.260 }' 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.260 11:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.828 [2024-11-04 11:45:54.128052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.828 [2024-11-04 11:45:54.128180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.828 [2024-11-04 11:45:54.128293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.828 [2024-11-04 11:45:54.128457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.828 [2024-11-04 11:45:54.128519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.828 11:45:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.828 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:29.087 /dev/nbd0 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.087 1+0 records in 00:13:29.087 1+0 records out 00:13:29.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353243 s, 11.6 MB/s 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.087 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:29.346 /dev/nbd1 00:13:29.346 11:45:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.346 1+0 records in 00:13:29.346 1+0 records out 00:13:29.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271267 s, 15.1 MB/s 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:29.346 11:45:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.346 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:29.605 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:29.605 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.605 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.605 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.605 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:29.605 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.605 11:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.605 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.605 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.605 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.605 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.605 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.605 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.864 [2024-11-04 11:45:55.363938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:29.864 [2024-11-04 11:45:55.364013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.864 [2024-11-04 11:45:55.364042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:29.864 [2024-11-04 11:45:55.364052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.864 [2024-11-04 11:45:55.366632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.864 [2024-11-04 11:45:55.366674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.864 [2024-11-04 11:45:55.366784] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:29.864 [2024-11-04 11:45:55.366839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.864 [2024-11-04 11:45:55.367010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.864 spare 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.864 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.123 [2024-11-04 11:45:55.466923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:30.123 [2024-11-04 11:45:55.466988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.123 [2024-11-04 11:45:55.467330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:30.123 [2024-11-04 11:45:55.467574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:30.123 [2024-11-04 11:45:55.467590] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:30.123 [2024-11-04 11:45:55.467818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.123 
11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.123 "name": "raid_bdev1", 00:13:30.123 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:30.123 "strip_size_kb": 0, 00:13:30.123 "state": "online", 00:13:30.123 "raid_level": "raid1", 00:13:30.123 "superblock": true, 00:13:30.123 "num_base_bdevs": 2, 00:13:30.123 "num_base_bdevs_discovered": 2, 00:13:30.123 "num_base_bdevs_operational": 2, 00:13:30.123 "base_bdevs_list": [ 00:13:30.123 { 00:13:30.123 "name": "spare", 00:13:30.123 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:30.123 "is_configured": true, 00:13:30.123 "data_offset": 2048, 00:13:30.123 "data_size": 63488 00:13:30.123 }, 00:13:30.123 { 00:13:30.123 "name": "BaseBdev2", 00:13:30.123 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:30.123 "is_configured": true, 00:13:30.123 "data_offset": 2048, 00:13:30.123 "data_size": 63488 00:13:30.123 } 00:13:30.123 ] 00:13:30.123 }' 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.123 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.382 11:45:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.382 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.641 11:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.641 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.641 "name": "raid_bdev1", 00:13:30.641 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:30.641 "strip_size_kb": 0, 00:13:30.641 "state": "online", 00:13:30.641 "raid_level": "raid1", 00:13:30.641 "superblock": true, 00:13:30.641 "num_base_bdevs": 2, 00:13:30.641 "num_base_bdevs_discovered": 2, 00:13:30.641 "num_base_bdevs_operational": 2, 00:13:30.641 "base_bdevs_list": [ 00:13:30.641 { 00:13:30.641 "name": "spare", 00:13:30.641 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:30.641 "is_configured": true, 00:13:30.641 "data_offset": 2048, 00:13:30.641 "data_size": 63488 00:13:30.641 }, 00:13:30.641 { 00:13:30.641 "name": "BaseBdev2", 00:13:30.641 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:30.641 "is_configured": true, 00:13:30.641 "data_offset": 2048, 00:13:30.641 "data_size": 63488 00:13:30.641 } 00:13:30.641 ] 00:13:30.641 }' 00:13:30.641 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.641 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.641 11:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.641 [2024-11-04 11:45:56.070847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.641 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.642 "name": "raid_bdev1", 00:13:30.642 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:30.642 "strip_size_kb": 0, 00:13:30.642 "state": "online", 00:13:30.642 "raid_level": "raid1", 00:13:30.642 "superblock": true, 00:13:30.642 "num_base_bdevs": 2, 00:13:30.642 "num_base_bdevs_discovered": 1, 00:13:30.642 "num_base_bdevs_operational": 1, 00:13:30.642 "base_bdevs_list": [ 00:13:30.642 { 00:13:30.642 "name": null, 00:13:30.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.642 "is_configured": false, 00:13:30.642 "data_offset": 0, 00:13:30.642 "data_size": 63488 00:13:30.642 }, 00:13:30.642 { 00:13:30.642 "name": "BaseBdev2", 00:13:30.642 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:30.642 "is_configured": true, 00:13:30.642 "data_offset": 2048, 00:13:30.642 "data_size": 63488 00:13:30.642 } 00:13:30.642 ] 00:13:30.642 }' 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.642 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.209 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:31.209 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.209 11:45:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.209 [2024-11-04 11:45:56.538084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.209 [2024-11-04 11:45:56.538354] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.209 [2024-11-04 11:45:56.538436] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:31.209 [2024-11-04 11:45:56.538557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.209 [2024-11-04 11:45:56.555034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:31.209 11:45:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.209 11:45:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:31.209 [2024-11-04 11:45:56.557120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.142 "name": "raid_bdev1", 00:13:32.142 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:32.142 "strip_size_kb": 0, 00:13:32.142 "state": "online", 00:13:32.142 "raid_level": "raid1", 00:13:32.142 "superblock": true, 00:13:32.142 "num_base_bdevs": 2, 00:13:32.142 "num_base_bdevs_discovered": 2, 00:13:32.142 "num_base_bdevs_operational": 2, 00:13:32.142 "process": { 00:13:32.142 "type": "rebuild", 00:13:32.142 "target": "spare", 00:13:32.142 "progress": { 00:13:32.142 "blocks": 20480, 00:13:32.142 "percent": 32 00:13:32.142 } 00:13:32.142 }, 00:13:32.142 "base_bdevs_list": [ 00:13:32.142 { 00:13:32.142 "name": "spare", 00:13:32.142 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:32.142 "is_configured": true, 00:13:32.142 "data_offset": 2048, 00:13:32.142 "data_size": 63488 00:13:32.142 }, 00:13:32.142 { 00:13:32.142 "name": "BaseBdev2", 00:13:32.142 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:32.142 "is_configured": true, 00:13:32.142 "data_offset": 2048, 00:13:32.142 "data_size": 63488 00:13:32.142 } 00:13:32.142 ] 00:13:32.142 }' 00:13:32.142 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:32.401 11:45:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.401 [2024-11-04 11:45:57.724775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.401 [2024-11-04 11:45:57.763005] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:32.401 [2024-11-04 11:45:57.763107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.401 [2024-11-04 11:45:57.763123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.401 [2024-11-04 11:45:57.763135] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.401 "name": "raid_bdev1", 00:13:32.401 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:32.401 "strip_size_kb": 0, 00:13:32.401 "state": "online", 00:13:32.401 "raid_level": "raid1", 00:13:32.401 "superblock": true, 00:13:32.401 "num_base_bdevs": 2, 00:13:32.401 "num_base_bdevs_discovered": 1, 00:13:32.401 "num_base_bdevs_operational": 1, 00:13:32.401 "base_bdevs_list": [ 00:13:32.401 { 00:13:32.401 "name": null, 00:13:32.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.401 "is_configured": false, 00:13:32.401 "data_offset": 0, 00:13:32.401 "data_size": 63488 00:13:32.401 }, 00:13:32.401 { 00:13:32.401 "name": "BaseBdev2", 00:13:32.401 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:32.401 "is_configured": true, 00:13:32.401 "data_offset": 2048, 00:13:32.401 "data_size": 63488 00:13:32.401 } 00:13:32.401 ] 00:13:32.401 }' 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.401 11:45:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.967 11:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.967 11:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:32.967 11:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.967 [2024-11-04 11:45:58.219591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.967 [2024-11-04 11:45:58.219744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.967 [2024-11-04 11:45:58.219825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:32.967 [2024-11-04 11:45:58.219869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.967 [2024-11-04 11:45:58.220480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.967 [2024-11-04 11:45:58.220560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.967 [2024-11-04 11:45:58.220726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:32.967 [2024-11-04 11:45:58.220777] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:32.967 [2024-11-04 11:45:58.220836] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:32.967 [2024-11-04 11:45:58.220923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.967 [2024-11-04 11:45:58.238580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:32.967 spare 00:13:32.967 11:45:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.967 11:45:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:32.967 [2024-11-04 11:45:58.240974] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.901 "name": "raid_bdev1", 00:13:33.901 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:33.901 "strip_size_kb": 0, 00:13:33.901 "state": "online", 00:13:33.901 
"raid_level": "raid1", 00:13:33.901 "superblock": true, 00:13:33.901 "num_base_bdevs": 2, 00:13:33.901 "num_base_bdevs_discovered": 2, 00:13:33.901 "num_base_bdevs_operational": 2, 00:13:33.901 "process": { 00:13:33.901 "type": "rebuild", 00:13:33.901 "target": "spare", 00:13:33.901 "progress": { 00:13:33.901 "blocks": 20480, 00:13:33.901 "percent": 32 00:13:33.901 } 00:13:33.901 }, 00:13:33.901 "base_bdevs_list": [ 00:13:33.901 { 00:13:33.901 "name": "spare", 00:13:33.901 "uuid": "71d08b72-be95-57cd-976b-9373b1b8057e", 00:13:33.901 "is_configured": true, 00:13:33.901 "data_offset": 2048, 00:13:33.901 "data_size": 63488 00:13:33.901 }, 00:13:33.901 { 00:13:33.901 "name": "BaseBdev2", 00:13:33.901 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:33.901 "is_configured": true, 00:13:33.901 "data_offset": 2048, 00:13:33.901 "data_size": 63488 00:13:33.901 } 00:13:33.901 ] 00:13:33.901 }' 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.901 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.902 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.902 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.902 [2024-11-04 11:45:59.408284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.159 [2024-11-04 11:45:59.446966] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.159 [2024-11-04 11:45:59.447084] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.159 [2024-11-04 11:45:59.447104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.159 [2024-11-04 11:45:59.447112] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.159 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.160 11:45:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.160 "name": "raid_bdev1", 00:13:34.160 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:34.160 "strip_size_kb": 0, 00:13:34.160 "state": "online", 00:13:34.160 "raid_level": "raid1", 00:13:34.160 "superblock": true, 00:13:34.160 "num_base_bdevs": 2, 00:13:34.160 "num_base_bdevs_discovered": 1, 00:13:34.160 "num_base_bdevs_operational": 1, 00:13:34.160 "base_bdevs_list": [ 00:13:34.160 { 00:13:34.160 "name": null, 00:13:34.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.160 "is_configured": false, 00:13:34.160 "data_offset": 0, 00:13:34.160 "data_size": 63488 00:13:34.160 }, 00:13:34.160 { 00:13:34.160 "name": "BaseBdev2", 00:13:34.160 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:34.160 "is_configured": true, 00:13:34.160 "data_offset": 2048, 00:13:34.160 "data_size": 63488 00:13:34.160 } 00:13:34.160 ] 00:13:34.160 }' 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.160 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.726 "name": "raid_bdev1", 00:13:34.726 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:34.726 "strip_size_kb": 0, 00:13:34.726 "state": "online", 00:13:34.726 "raid_level": "raid1", 00:13:34.726 "superblock": true, 00:13:34.726 "num_base_bdevs": 2, 00:13:34.726 "num_base_bdevs_discovered": 1, 00:13:34.726 "num_base_bdevs_operational": 1, 00:13:34.726 "base_bdevs_list": [ 00:13:34.726 { 00:13:34.726 "name": null, 00:13:34.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.726 "is_configured": false, 00:13:34.726 "data_offset": 0, 00:13:34.726 "data_size": 63488 00:13:34.726 }, 00:13:34.726 { 00:13:34.726 "name": "BaseBdev2", 00:13:34.726 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:34.726 "is_configured": true, 00:13:34.726 "data_offset": 2048, 00:13:34.726 "data_size": 63488 00:13:34.726 } 00:13:34.726 ] 00:13:34.726 }' 00:13:34.726 11:45:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.726 [2024-11-04 11:46:00.094211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:34.726 [2024-11-04 11:46:00.094319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.726 [2024-11-04 11:46:00.094386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:34.726 [2024-11-04 11:46:00.094456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.726 [2024-11-04 11:46:00.094993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.726 [2024-11-04 11:46:00.095019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.726 [2024-11-04 11:46:00.095108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:34.726 [2024-11-04 11:46:00.095123] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.726 [2024-11-04 11:46:00.095133] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:34.726 [2024-11-04 11:46:00.095144] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:34.726 BaseBdev1 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:34.726 11:46:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.661 "name": "raid_bdev1", 00:13:35.661 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:35.661 "strip_size_kb": 0, 
00:13:35.661 "state": "online", 00:13:35.661 "raid_level": "raid1", 00:13:35.661 "superblock": true, 00:13:35.661 "num_base_bdevs": 2, 00:13:35.661 "num_base_bdevs_discovered": 1, 00:13:35.661 "num_base_bdevs_operational": 1, 00:13:35.661 "base_bdevs_list": [ 00:13:35.661 { 00:13:35.661 "name": null, 00:13:35.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.661 "is_configured": false, 00:13:35.661 "data_offset": 0, 00:13:35.661 "data_size": 63488 00:13:35.661 }, 00:13:35.661 { 00:13:35.661 "name": "BaseBdev2", 00:13:35.661 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:35.661 "is_configured": true, 00:13:35.661 "data_offset": 2048, 00:13:35.661 "data_size": 63488 00:13:35.661 } 00:13:35.661 ] 00:13:35.661 }' 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.661 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.228 "name": "raid_bdev1", 00:13:36.228 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:36.228 "strip_size_kb": 0, 00:13:36.228 "state": "online", 00:13:36.228 "raid_level": "raid1", 00:13:36.228 "superblock": true, 00:13:36.228 "num_base_bdevs": 2, 00:13:36.228 "num_base_bdevs_discovered": 1, 00:13:36.228 "num_base_bdevs_operational": 1, 00:13:36.228 "base_bdevs_list": [ 00:13:36.228 { 00:13:36.228 "name": null, 00:13:36.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.228 "is_configured": false, 00:13:36.228 "data_offset": 0, 00:13:36.228 "data_size": 63488 00:13:36.228 }, 00:13:36.228 { 00:13:36.228 "name": "BaseBdev2", 00:13:36.228 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:36.228 "is_configured": true, 00:13:36.228 "data_offset": 2048, 00:13:36.228 "data_size": 63488 00:13:36.228 } 00:13:36.228 ] 00:13:36.228 }' 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:36.228 11:46:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.228 [2024-11-04 11:46:01.695616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.228 [2024-11-04 11:46:01.695803] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:36.228 [2024-11-04 11:46:01.695822] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:36.228 request: 00:13:36.228 { 00:13:36.228 "base_bdev": "BaseBdev1", 00:13:36.228 "raid_bdev": "raid_bdev1", 00:13:36.228 "method": "bdev_raid_add_base_bdev", 00:13:36.228 "req_id": 1 00:13:36.228 } 00:13:36.228 Got JSON-RPC error response 00:13:36.228 response: 00:13:36.228 { 00:13:36.228 "code": -22, 00:13:36.228 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:36.228 } 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.228 11:46:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.198 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.456 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.456 11:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.456 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.457 11:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.457 11:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.457 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.457 "name": "raid_bdev1", 00:13:37.457 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 
00:13:37.457 "strip_size_kb": 0, 00:13:37.457 "state": "online", 00:13:37.457 "raid_level": "raid1", 00:13:37.457 "superblock": true, 00:13:37.457 "num_base_bdevs": 2, 00:13:37.457 "num_base_bdevs_discovered": 1, 00:13:37.457 "num_base_bdevs_operational": 1, 00:13:37.457 "base_bdevs_list": [ 00:13:37.457 { 00:13:37.457 "name": null, 00:13:37.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.457 "is_configured": false, 00:13:37.457 "data_offset": 0, 00:13:37.457 "data_size": 63488 00:13:37.457 }, 00:13:37.457 { 00:13:37.457 "name": "BaseBdev2", 00:13:37.457 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:37.457 "is_configured": true, 00:13:37.457 "data_offset": 2048, 00:13:37.457 "data_size": 63488 00:13:37.457 } 00:13:37.457 ] 00:13:37.457 }' 00:13:37.457 11:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.457 11:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.715 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.715 11:46:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.975 "name": "raid_bdev1", 00:13:37.975 "uuid": "dd542c90-3bae-4f05-b371-20636c7bf6ac", 00:13:37.975 "strip_size_kb": 0, 00:13:37.975 "state": "online", 00:13:37.975 "raid_level": "raid1", 00:13:37.975 "superblock": true, 00:13:37.975 "num_base_bdevs": 2, 00:13:37.975 "num_base_bdevs_discovered": 1, 00:13:37.975 "num_base_bdevs_operational": 1, 00:13:37.975 "base_bdevs_list": [ 00:13:37.975 { 00:13:37.975 "name": null, 00:13:37.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.975 "is_configured": false, 00:13:37.975 "data_offset": 0, 00:13:37.975 "data_size": 63488 00:13:37.975 }, 00:13:37.975 { 00:13:37.975 "name": "BaseBdev2", 00:13:37.975 "uuid": "6f74ea76-a89d-5365-9b5e-a6492a2cec00", 00:13:37.975 "is_configured": true, 00:13:37.975 "data_offset": 2048, 00:13:37.975 "data_size": 63488 00:13:37.975 } 00:13:37.975 ] 00:13:37.975 }' 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75959 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75959 ']' 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75959 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75959 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75959' 00:13:37.975 killing process with pid 75959 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75959 00:13:37.975 Received shutdown signal, test time was about 60.000000 seconds 00:13:37.975 00:13:37.975 Latency(us) 00:13:37.975 [2024-11-04T11:46:03.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.975 [2024-11-04T11:46:03.497Z] =================================================================================================================== 00:13:37.975 [2024-11-04T11:46:03.497Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.975 [2024-11-04 11:46:03.379126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.975 [2024-11-04 11:46:03.379270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.975 11:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75959 00:13:37.975 [2024-11-04 11:46:03.379325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.975 [2024-11-04 11:46:03.379338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:38.233 [2024-11-04 11:46:03.746595] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.611 11:46:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:39.611 00:13:39.611 real 0m23.868s 
00:13:39.611 user 0m28.993s 00:13:39.611 sys 0m3.737s 00:13:39.611 ************************************ 00:13:39.611 END TEST raid_rebuild_test_sb 00:13:39.611 ************************************ 00:13:39.611 11:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:39.611 11:46:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.611 11:46:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:39.611 11:46:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:39.611 11:46:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:39.611 11:46:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.871 ************************************ 00:13:39.871 START TEST raid_rebuild_test_io 00:13:39.871 ************************************ 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:39.871 
11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76702 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76702 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76702 ']' 00:13:39.871 11:46:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:39.871 11:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.871 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:39.871 Zero copy mechanism will not be used. 00:13:39.871 [2024-11-04 11:46:05.249272] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:13:39.871 [2024-11-04 11:46:05.249434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76702 ] 00:13:40.131 [2024-11-04 11:46:05.414361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.131 [2024-11-04 11:46:05.563172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.391 [2024-11-04 11:46:05.801105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.391 [2024-11-04 11:46:05.801156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.959 BaseBdev1_malloc 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.959 [2024-11-04 11:46:06.255239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:40.959 [2024-11-04 11:46:06.255315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.959 [2024-11-04 11:46:06.255343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:40.959 [2024-11-04 11:46:06.255357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.959 [2024-11-04 11:46:06.257803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.959 [2024-11-04 11:46:06.257847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:40.959 BaseBdev1 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:40.959 11:46:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.959 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.959 BaseBdev2_malloc 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.960 [2024-11-04 11:46:06.316954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:40.960 [2024-11-04 11:46:06.317028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.960 [2024-11-04 11:46:06.317050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:40.960 [2024-11-04 11:46:06.317062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.960 [2024-11-04 11:46:06.319363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.960 [2024-11-04 11:46:06.319414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:40.960 BaseBdev2 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.960 spare_malloc 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.960 spare_delay 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.960 [2024-11-04 11:46:06.396739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:40.960 [2024-11-04 11:46:06.396807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.960 [2024-11-04 11:46:06.396830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:40.960 [2024-11-04 11:46:06.396842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.960 [2024-11-04 11:46:06.399013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.960 [2024-11-04 11:46:06.399051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:40.960 spare 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.960 11:46:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.960 [2024-11-04 11:46:06.408752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.960 [2024-11-04 11:46:06.410645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.960 [2024-11-04 11:46:06.410730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:40.960 [2024-11-04 11:46:06.410743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:40.960 [2024-11-04 11:46:06.410989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:40.960 [2024-11-04 11:46:06.411134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:40.960 [2024-11-04 11:46:06.411145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:40.960 [2024-11-04 11:46:06.411293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.960 "name": "raid_bdev1", 00:13:40.960 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:40.960 "strip_size_kb": 0, 00:13:40.960 "state": "online", 00:13:40.960 "raid_level": "raid1", 00:13:40.960 "superblock": false, 00:13:40.960 "num_base_bdevs": 2, 00:13:40.960 "num_base_bdevs_discovered": 2, 00:13:40.960 "num_base_bdevs_operational": 2, 00:13:40.960 "base_bdevs_list": [ 00:13:40.960 { 00:13:40.960 "name": "BaseBdev1", 00:13:40.960 "uuid": "6894ec89-94cd-5cc6-bef1-efd9d923e9d8", 00:13:40.960 "is_configured": true, 00:13:40.960 "data_offset": 0, 00:13:40.960 "data_size": 65536 00:13:40.960 }, 00:13:40.960 { 00:13:40.960 "name": "BaseBdev2", 00:13:40.960 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:40.960 "is_configured": true, 00:13:40.960 "data_offset": 0, 00:13:40.960 "data_size": 65536 00:13:40.960 } 00:13:40.960 ] 00:13:40.960 }' 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.960 11:46:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.525 [2024-11-04 11:46:06.848328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.525 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.526 [2024-11-04 11:46:06.939880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.526 11:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.526 11:46:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.526 "name": "raid_bdev1", 00:13:41.526 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:41.526 "strip_size_kb": 0, 00:13:41.526 "state": "online", 00:13:41.526 "raid_level": "raid1", 00:13:41.526 "superblock": false, 00:13:41.526 "num_base_bdevs": 2, 00:13:41.526 "num_base_bdevs_discovered": 1, 00:13:41.526 "num_base_bdevs_operational": 1, 00:13:41.526 "base_bdevs_list": [ 00:13:41.526 { 00:13:41.526 "name": null, 00:13:41.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.526 "is_configured": false, 00:13:41.526 "data_offset": 0, 00:13:41.526 "data_size": 65536 00:13:41.526 }, 00:13:41.526 { 00:13:41.526 "name": "BaseBdev2", 00:13:41.526 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:41.526 "is_configured": true, 00:13:41.526 "data_offset": 0, 00:13:41.526 "data_size": 65536 00:13:41.526 } 00:13:41.526 ] 00:13:41.526 }' 00:13:41.526 11:46:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.526 11:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.783 [2024-11-04 11:46:07.064509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:41.783 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:41.783 Zero copy mechanism will not be used. 00:13:41.783 Running I/O for 60 seconds... 
00:13:42.041 11:46:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.041 11:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.041 11:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.041 [2024-11-04 11:46:07.387910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.041 11:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.041 11:46:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:42.041 [2024-11-04 11:46:07.441032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:42.041 [2024-11-04 11:46:07.443179] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.299 [2024-11-04 11:46:07.565574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.299 [2024-11-04 11:46:07.566013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:42.299 [2024-11-04 11:46:07.775972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.299 [2024-11-04 11:46:07.776338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:42.819 194.00 IOPS, 582.00 MiB/s [2024-11-04T11:46:08.341Z] [2024-11-04 11:46:08.105676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:42.819 [2024-11-04 11:46:08.106250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:42.819 [2024-11-04 11:46:08.317856] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:42.819 [2024-11-04 11:46:08.318327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.078 "name": "raid_bdev1", 00:13:43.078 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:43.078 "strip_size_kb": 0, 00:13:43.078 "state": "online", 00:13:43.078 "raid_level": "raid1", 00:13:43.078 "superblock": false, 00:13:43.078 "num_base_bdevs": 2, 00:13:43.078 "num_base_bdevs_discovered": 2, 00:13:43.078 "num_base_bdevs_operational": 2, 00:13:43.078 "process": { 00:13:43.078 "type": "rebuild", 00:13:43.078 "target": "spare", 00:13:43.078 "progress": { 00:13:43.078 "blocks": 10240, 
00:13:43.078 "percent": 15 00:13:43.078 } 00:13:43.078 }, 00:13:43.078 "base_bdevs_list": [ 00:13:43.078 { 00:13:43.078 "name": "spare", 00:13:43.078 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:43.078 "is_configured": true, 00:13:43.078 "data_offset": 0, 00:13:43.078 "data_size": 65536 00:13:43.078 }, 00:13:43.078 { 00:13:43.078 "name": "BaseBdev2", 00:13:43.078 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:43.078 "is_configured": true, 00:13:43.078 "data_offset": 0, 00:13:43.078 "data_size": 65536 00:13:43.078 } 00:13:43.078 ] 00:13:43.078 }' 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.078 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.078 [2024-11-04 11:46:08.572570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.337 [2024-11-04 11:46:08.759945] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.338 [2024-11-04 11:46:08.765753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.338 [2024-11-04 11:46:08.765840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.338 [2024-11-04 11:46:08.765865] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.338 
[2024-11-04 11:46:08.813550] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.338 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.597 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.597 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.597 "name": 
"raid_bdev1", 00:13:43.597 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:43.597 "strip_size_kb": 0, 00:13:43.597 "state": "online", 00:13:43.597 "raid_level": "raid1", 00:13:43.597 "superblock": false, 00:13:43.597 "num_base_bdevs": 2, 00:13:43.597 "num_base_bdevs_discovered": 1, 00:13:43.597 "num_base_bdevs_operational": 1, 00:13:43.597 "base_bdevs_list": [ 00:13:43.597 { 00:13:43.597 "name": null, 00:13:43.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.597 "is_configured": false, 00:13:43.597 "data_offset": 0, 00:13:43.597 "data_size": 65536 00:13:43.597 }, 00:13:43.597 { 00:13:43.597 "name": "BaseBdev2", 00:13:43.597 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:43.597 "is_configured": true, 00:13:43.597 "data_offset": 0, 00:13:43.597 "data_size": 65536 00:13:43.597 } 00:13:43.597 ] 00:13:43.597 }' 00:13:43.597 11:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.597 11:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.856 163.00 IOPS, 489.00 MiB/s [2024-11-04T11:46:09.378Z] 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.856 "name": "raid_bdev1", 00:13:43.856 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:43.856 "strip_size_kb": 0, 00:13:43.856 "state": "online", 00:13:43.856 "raid_level": "raid1", 00:13:43.856 "superblock": false, 00:13:43.856 "num_base_bdevs": 2, 00:13:43.856 "num_base_bdevs_discovered": 1, 00:13:43.856 "num_base_bdevs_operational": 1, 00:13:43.856 "base_bdevs_list": [ 00:13:43.856 { 00:13:43.856 "name": null, 00:13:43.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.856 "is_configured": false, 00:13:43.856 "data_offset": 0, 00:13:43.856 "data_size": 65536 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "name": "BaseBdev2", 00:13:43.856 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:43.856 "is_configured": true, 00:13:43.856 "data_offset": 0, 00:13:43.856 "data_size": 65536 00:13:43.856 } 00:13:43.856 ] 00:13:43.856 }' 00:13:43.856 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.119 [2024-11-04 11:46:09.462655] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.119 11:46:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:44.119 [2024-11-04 11:46:09.518591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:44.119 [2024-11-04 11:46:09.521225] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.119 [2024-11-04 11:46:09.637163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.119 [2024-11-04 11:46:09.638239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:44.379 [2024-11-04 11:46:09.763053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.379 [2024-11-04 11:46:09.763665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:44.653 168.00 IOPS, 504.00 MiB/s [2024-11-04T11:46:10.175Z] [2024-11-04 11:46:10.102188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:44.912 [2024-11-04 11:46:10.314237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:44.912 [2024-11-04 11:46:10.314801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.171 11:46:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.171 "name": "raid_bdev1", 00:13:45.171 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:45.171 "strip_size_kb": 0, 00:13:45.171 "state": "online", 00:13:45.171 "raid_level": "raid1", 00:13:45.171 "superblock": false, 00:13:45.171 "num_base_bdevs": 2, 00:13:45.171 "num_base_bdevs_discovered": 2, 00:13:45.171 "num_base_bdevs_operational": 2, 00:13:45.171 "process": { 00:13:45.171 "type": "rebuild", 00:13:45.171 "target": "spare", 00:13:45.171 "progress": { 00:13:45.171 "blocks": 10240, 00:13:45.171 "percent": 15 00:13:45.171 } 00:13:45.171 }, 00:13:45.171 "base_bdevs_list": [ 00:13:45.171 { 00:13:45.171 "name": "spare", 00:13:45.171 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:45.171 "is_configured": true, 00:13:45.171 "data_offset": 0, 00:13:45.171 "data_size": 65536 00:13:45.171 }, 00:13:45.171 { 00:13:45.171 "name": "BaseBdev2", 00:13:45.171 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:45.171 "is_configured": true, 00:13:45.171 "data_offset": 0, 00:13:45.171 "data_size": 65536 00:13:45.171 } 00:13:45.171 ] 
00:13:45.171 }' 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.171 [2024-11-04 11:46:10.659757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.171 11:46:10 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.171 11:46:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.431 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.431 "name": "raid_bdev1", 00:13:45.431 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:45.431 "strip_size_kb": 0, 00:13:45.431 "state": "online", 00:13:45.431 "raid_level": "raid1", 00:13:45.431 "superblock": false, 00:13:45.431 "num_base_bdevs": 2, 00:13:45.431 "num_base_bdevs_discovered": 2, 00:13:45.431 "num_base_bdevs_operational": 2, 00:13:45.431 "process": { 00:13:45.431 "type": "rebuild", 00:13:45.431 "target": "spare", 00:13:45.431 "progress": { 00:13:45.431 "blocks": 14336, 00:13:45.431 "percent": 21 00:13:45.431 } 00:13:45.431 }, 00:13:45.431 "base_bdevs_list": [ 00:13:45.431 { 00:13:45.431 "name": "spare", 00:13:45.431 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:45.431 "is_configured": true, 00:13:45.431 "data_offset": 0, 00:13:45.431 "data_size": 65536 00:13:45.431 }, 00:13:45.431 { 00:13:45.431 "name": "BaseBdev2", 00:13:45.431 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:45.431 "is_configured": true, 00:13:45.431 "data_offset": 0, 00:13:45.431 "data_size": 65536 00:13:45.431 } 00:13:45.431 ] 00:13:45.431 }' 00:13:45.431 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.431 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.431 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.431 [2024-11-04 11:46:10.794866] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:45.431 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.431 11:46:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.689 138.00 IOPS, 414.00 MiB/s [2024-11-04T11:46:11.211Z] [2024-11-04 11:46:11.174659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:45.948 [2024-11-04 11:46:11.299616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.514 [2024-11-04 11:46:11.857920] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:46.514 [2024-11-04 11:46:11.859054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.514 "name": "raid_bdev1", 00:13:46.514 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:46.514 "strip_size_kb": 0, 00:13:46.514 "state": "online", 00:13:46.514 "raid_level": "raid1", 00:13:46.514 "superblock": false, 00:13:46.514 "num_base_bdevs": 2, 00:13:46.514 "num_base_bdevs_discovered": 2, 00:13:46.514 "num_base_bdevs_operational": 2, 00:13:46.514 "process": { 00:13:46.514 "type": "rebuild", 00:13:46.514 "target": "spare", 00:13:46.514 "progress": { 00:13:46.514 "blocks": 30720, 00:13:46.514 "percent": 46 00:13:46.514 } 00:13:46.514 }, 00:13:46.514 "base_bdevs_list": [ 00:13:46.514 { 00:13:46.514 "name": "spare", 00:13:46.514 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:46.514 "is_configured": true, 00:13:46.514 "data_offset": 0, 00:13:46.514 "data_size": 65536 00:13:46.514 }, 00:13:46.514 { 00:13:46.514 "name": "BaseBdev2", 00:13:46.514 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:46.514 "is_configured": true, 00:13:46.514 "data_offset": 0, 00:13:46.514 "data_size": 65536 00:13:46.514 } 00:13:46.514 ] 00:13:46.514 }' 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.514 11:46:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.772 121.80 IOPS, 365.40 MiB/s 
[2024-11-04T11:46:12.294Z] [2024-11-04 11:46:12.064574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:47.031 [2024-11-04 11:46:12.472447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:47.290 [2024-11-04 11:46:12.693788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:47.290 [2024-11-04 11:46:12.694801] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.549 11:46:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.549 11:46:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:47.549 "name": "raid_bdev1", 00:13:47.549 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:47.549 "strip_size_kb": 0, 00:13:47.549 "state": "online", 00:13:47.549 "raid_level": "raid1", 00:13:47.549 "superblock": false, 00:13:47.549 "num_base_bdevs": 2, 00:13:47.549 "num_base_bdevs_discovered": 2, 00:13:47.549 "num_base_bdevs_operational": 2, 00:13:47.549 "process": { 00:13:47.549 "type": "rebuild", 00:13:47.549 "target": "spare", 00:13:47.549 "progress": { 00:13:47.549 "blocks": 47104, 00:13:47.549 "percent": 71 00:13:47.549 } 00:13:47.549 }, 00:13:47.549 "base_bdevs_list": [ 00:13:47.549 { 00:13:47.549 "name": "spare", 00:13:47.549 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:47.549 "is_configured": true, 00:13:47.549 "data_offset": 0, 00:13:47.549 "data_size": 65536 00:13:47.549 }, 00:13:47.549 { 00:13:47.549 "name": "BaseBdev2", 00:13:47.549 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:47.549 "is_configured": true, 00:13:47.549 "data_offset": 0, 00:13:47.549 "data_size": 65536 00:13:47.549 } 00:13:47.549 ] 00:13:47.549 }' 00:13:47.549 11:46:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.549 11:46:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.549 11:46:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.808 109.17 IOPS, 327.50 MiB/s [2024-11-04T11:46:13.330Z] 11:46:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.808 11:46:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.808 [2024-11-04 11:46:13.127215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:48.066 [2024-11-04 11:46:13.471122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 
00:13:48.325 [2024-11-04 11:46:13.700378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:48.597 96.86 IOPS, 290.57 MiB/s [2024-11-04T11:46:14.119Z] 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.597 11:46:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.856 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.856 "name": "raid_bdev1", 00:13:48.856 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:48.856 "strip_size_kb": 0, 00:13:48.856 "state": "online", 00:13:48.856 "raid_level": "raid1", 00:13:48.856 "superblock": false, 00:13:48.856 "num_base_bdevs": 2, 00:13:48.856 "num_base_bdevs_discovered": 2, 00:13:48.856 "num_base_bdevs_operational": 2, 00:13:48.856 "process": { 00:13:48.856 "type": "rebuild", 00:13:48.856 "target": "spare", 00:13:48.856 
"progress": { 00:13:48.856 "blocks": 63488, 00:13:48.856 "percent": 96 00:13:48.856 } 00:13:48.856 }, 00:13:48.856 "base_bdevs_list": [ 00:13:48.856 { 00:13:48.856 "name": "spare", 00:13:48.856 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:48.856 "is_configured": true, 00:13:48.856 "data_offset": 0, 00:13:48.856 "data_size": 65536 00:13:48.856 }, 00:13:48.856 { 00:13:48.856 "name": "BaseBdev2", 00:13:48.856 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:48.856 "is_configured": true, 00:13:48.856 "data_offset": 0, 00:13:48.856 "data_size": 65536 00:13:48.856 } 00:13:48.856 ] 00:13:48.856 }' 00:13:48.856 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.856 [2024-11-04 11:46:14.142936] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:48.856 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.856 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.856 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.856 11:46:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:48.856 [2024-11-04 11:46:14.250528] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:48.856 [2024-11-04 11:46:14.254612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.791 89.00 IOPS, 267.00 MiB/s [2024-11-04T11:46:15.313Z] 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.791 11:46:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.791 "name": "raid_bdev1", 00:13:49.791 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:49.791 "strip_size_kb": 0, 00:13:49.791 "state": "online", 00:13:49.791 "raid_level": "raid1", 00:13:49.791 "superblock": false, 00:13:49.791 "num_base_bdevs": 2, 00:13:49.791 "num_base_bdevs_discovered": 2, 00:13:49.791 "num_base_bdevs_operational": 2, 00:13:49.791 "base_bdevs_list": [ 00:13:49.791 { 00:13:49.791 "name": "spare", 00:13:49.791 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:49.791 "is_configured": true, 00:13:49.791 "data_offset": 0, 00:13:49.791 "data_size": 65536 00:13:49.791 }, 00:13:49.791 { 00:13:49.791 "name": "BaseBdev2", 00:13:49.791 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:49.791 "is_configured": true, 00:13:49.791 "data_offset": 0, 00:13:49.791 "data_size": 65536 00:13:49.791 } 00:13:49.791 ] 00:13:49.791 }' 00:13:49.791 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.050 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.051 "name": "raid_bdev1", 00:13:50.051 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:50.051 "strip_size_kb": 0, 00:13:50.051 "state": "online", 00:13:50.051 "raid_level": "raid1", 00:13:50.051 "superblock": false, 00:13:50.051 "num_base_bdevs": 2, 00:13:50.051 "num_base_bdevs_discovered": 2, 00:13:50.051 "num_base_bdevs_operational": 2, 00:13:50.051 "base_bdevs_list": [ 00:13:50.051 { 00:13:50.051 
"name": "spare", 00:13:50.051 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:50.051 "is_configured": true, 00:13:50.051 "data_offset": 0, 00:13:50.051 "data_size": 65536 00:13:50.051 }, 00:13:50.051 { 00:13:50.051 "name": "BaseBdev2", 00:13:50.051 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:50.051 "is_configured": true, 00:13:50.051 "data_offset": 0, 00:13:50.051 "data_size": 65536 00:13:50.051 } 00:13:50.051 ] 00:13:50.051 }' 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.051 "name": "raid_bdev1", 00:13:50.051 "uuid": "c399b22a-fe35-470a-a32a-d4d659de49e5", 00:13:50.051 "strip_size_kb": 0, 00:13:50.051 "state": "online", 00:13:50.051 "raid_level": "raid1", 00:13:50.051 "superblock": false, 00:13:50.051 "num_base_bdevs": 2, 00:13:50.051 "num_base_bdevs_discovered": 2, 00:13:50.051 "num_base_bdevs_operational": 2, 00:13:50.051 "base_bdevs_list": [ 00:13:50.051 { 00:13:50.051 "name": "spare", 00:13:50.051 "uuid": "0cbf0608-c6ef-5523-b92b-a18811c080e2", 00:13:50.051 "is_configured": true, 00:13:50.051 "data_offset": 0, 00:13:50.051 "data_size": 65536 00:13:50.051 }, 00:13:50.051 { 00:13:50.051 "name": "BaseBdev2", 00:13:50.051 "uuid": "4e785a2b-b0dc-5873-8c2f-0bcb9c4bb963", 00:13:50.051 "is_configured": true, 00:13:50.051 "data_offset": 0, 00:13:50.051 "data_size": 65536 00:13:50.051 } 00:13:50.051 ] 00:13:50.051 }' 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.051 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.618 11:46:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:50.618 11:46:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.618 11:46:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.618 [2024-11-04 11:46:15.993030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.618 [2024-11-04 11:46:15.993123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.618 00:13:50.618 Latency(us) 00:13:50.618 [2024-11-04T11:46:16.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.618 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:50.618 raid_bdev1 : 8.96 83.23 249.68 0.00 0.00 16507.19 332.69 118136.51 00:13:50.618 [2024-11-04T11:46:16.140Z] =================================================================================================================== 00:13:50.618 [2024-11-04T11:46:16.140Z] Total : 83.23 249.68 0.00 0.00 16507.19 332.69 118136.51 00:13:50.618 [2024-11-04 11:46:16.041059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.618 [2024-11-04 11:46:16.041132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.618 [2024-11-04 11:46:16.041247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.618 [2024-11-04 11:46:16.041260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:50.618 { 00:13:50.618 "results": [ 00:13:50.618 { 00:13:50.618 "job": "raid_bdev1", 00:13:50.618 "core_mask": "0x1", 00:13:50.618 "workload": "randrw", 00:13:50.618 "percentage": 50, 00:13:50.618 "status": "finished", 00:13:50.618 "queue_depth": 2, 00:13:50.618 "io_size": 3145728, 00:13:50.618 "runtime": 8.96334, 00:13:50.618 "iops": 83.22790388404323, 00:13:50.618 "mibps": 249.6837116521297, 00:13:50.618 "io_failed": 0, 00:13:50.618 "io_timeout": 0, 00:13:50.618 "avg_latency_us": 16507.192413688143, 00:13:50.618 "min_latency_us": 332.6882096069869, 00:13:50.618 
"max_latency_us": 118136.51004366812 00:13:50.618 } 00:13:50.618 ], 00:13:50.618 "core_count": 1 00:13:50.618 } 00:13:50.618 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.618 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:50.618 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.618 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.618 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.619 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.619 11:46:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:50.878 /dev/nbd0 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.878 1+0 records in 00:13:50.878 1+0 records out 00:13:50.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386133 s, 10.6 MB/s 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.878 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:51.137 /dev/nbd1 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.137 1+0 records in 00:13:51.137 1+0 records out 00:13:51.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537682 s, 7.6 MB/s 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.137 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:13:51.396 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:51.396 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.396 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:51.396 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.396 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:51.396 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.396 11:46:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.660 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76702 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76702 ']' 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76702 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 76702 00:13:51.949 killing process with pid 76702 00:13:51.949 Received shutdown signal, test time was about 10.263703 seconds 00:13:51.949 00:13:51.949 Latency(us) 00:13:51.949 [2024-11-04T11:46:17.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.949 [2024-11-04T11:46:17.471Z] =================================================================================================================== 00:13:51.949 [2024-11-04T11:46:17.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76702' 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76702 00:13:51.949 [2024-11-04 11:46:17.310734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.949 11:46:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76702 00:13:52.209 [2024-11-04 11:46:17.563280] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:53.587 00:13:53.587 real 0m13.650s 00:13:53.587 user 0m17.071s 00:13:53.587 sys 0m1.571s 00:13:53.587 ************************************ 00:13:53.587 END TEST raid_rebuild_test_io 00:13:53.587 ************************************ 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.587 11:46:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:53.587 
11:46:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:53.587 11:46:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:53.587 11:46:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.587 ************************************ 00:13:53.587 START TEST raid_rebuild_test_sb_io 00:13:53.587 ************************************ 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:53.587 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2') 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77097 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77097 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77097 ']' 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.588 11:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.588 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:53.588 Zero copy mechanism will not be used. 00:13:53.588 [2024-11-04 11:46:18.971173] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:13:53.588 [2024-11-04 11:46:18.971303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77097 ] 00:13:53.847 [2024-11-04 11:46:19.148560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.847 [2024-11-04 11:46:19.273410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.106 [2024-11-04 11:46:19.494633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.106 [2024-11-04 11:46:19.494770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.672 
11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 BaseBdev1_malloc 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 [2024-11-04 11:46:19.932549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:54.672 [2024-11-04 11:46:19.932694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.672 [2024-11-04 11:46:19.932765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:54.672 [2024-11-04 11:46:19.932812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.672 [2024-11-04 11:46:19.935094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.672 [2024-11-04 11:46:19.935173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.672 BaseBdev1 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.672 BaseBdev2_malloc 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 [2024-11-04 11:46:19.991298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:54.672 [2024-11-04 11:46:19.991454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.672 [2024-11-04 11:46:19.991502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:54.672 [2024-11-04 11:46:19.991603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.672 [2024-11-04 11:46:19.994041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.672 [2024-11-04 11:46:19.994088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:54.672 BaseBdev2 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 11:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 spare_malloc 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 spare_delay 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.672 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.672 [2024-11-04 11:46:20.072066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.672 [2024-11-04 11:46:20.072219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.672 [2024-11-04 11:46:20.072276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:54.672 [2024-11-04 11:46:20.072328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.672 [2024-11-04 11:46:20.074684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.672 [2024-11-04 11:46:20.074772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.673 spare 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.673 
[2024-11-04 11:46:20.084142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.673 [2024-11-04 11:46:20.086226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.673 [2024-11-04 11:46:20.086483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:54.673 [2024-11-04 11:46:20.086536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:54.673 [2024-11-04 11:46:20.086848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:54.673 [2024-11-04 11:46:20.087068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:54.673 [2024-11-04 11:46:20.087111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:54.673 [2024-11-04 11:46:20.087358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.673 "name": "raid_bdev1", 00:13:54.673 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:13:54.673 "strip_size_kb": 0, 00:13:54.673 "state": "online", 00:13:54.673 "raid_level": "raid1", 00:13:54.673 "superblock": true, 00:13:54.673 "num_base_bdevs": 2, 00:13:54.673 "num_base_bdevs_discovered": 2, 00:13:54.673 "num_base_bdevs_operational": 2, 00:13:54.673 "base_bdevs_list": [ 00:13:54.673 { 00:13:54.673 "name": "BaseBdev1", 00:13:54.673 "uuid": "8175672c-5d4e-5061-a716-6ab1545be66b", 00:13:54.673 "is_configured": true, 00:13:54.673 "data_offset": 2048, 00:13:54.673 "data_size": 63488 00:13:54.673 }, 00:13:54.673 { 00:13:54.673 "name": "BaseBdev2", 00:13:54.673 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:13:54.673 "is_configured": true, 00:13:54.673 "data_offset": 2048, 00:13:54.673 "data_size": 63488 00:13:54.673 } 00:13:54.673 ] 00:13:54.673 }' 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.673 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:55.246 [2024-11-04 11:46:20.587549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 11:46:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 [2024-11-04 11:46:20.687074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.246 "name": "raid_bdev1", 00:13:55.246 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:13:55.246 "strip_size_kb": 0, 00:13:55.246 "state": "online", 00:13:55.246 "raid_level": "raid1", 00:13:55.246 "superblock": true, 00:13:55.246 "num_base_bdevs": 2, 00:13:55.246 "num_base_bdevs_discovered": 1, 00:13:55.246 "num_base_bdevs_operational": 1, 00:13:55.246 "base_bdevs_list": [ 00:13:55.246 { 00:13:55.246 "name": null, 00:13:55.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.246 "is_configured": false, 00:13:55.246 "data_offset": 0, 00:13:55.246 "data_size": 63488 00:13:55.246 }, 00:13:55.246 { 00:13:55.246 "name": "BaseBdev2", 00:13:55.246 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:13:55.246 "is_configured": true, 00:13:55.246 "data_offset": 2048, 00:13:55.246 "data_size": 63488 00:13:55.246 } 00:13:55.246 ] 00:13:55.246 }' 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.246 11:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.505 [2024-11-04 11:46:20.783560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:55.505 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:55.505 Zero copy mechanism will not be used. 00:13:55.505 Running I/O for 60 seconds... 
00:13:55.764 11:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.764 11:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.764 11:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.764 [2024-11-04 11:46:21.130059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.764 11:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.764 11:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:55.764 [2024-11-04 11:46:21.187421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:55.764 [2024-11-04 11:46:21.189515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:56.023 [2024-11-04 11:46:21.303599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:56.023 [2024-11-04 11:46:21.304215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:56.023 [2024-11-04 11:46:21.514513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:56.023 [2024-11-04 11:46:21.514834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:56.281 [2024-11-04 11:46:21.750016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:56.540 198.00 IOPS, 594.00 MiB/s [2024-11-04T11:46:22.062Z] [2024-11-04 11:46:21.853015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.798 [2024-11-04 11:46:22.104029] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.798 [2024-11-04 11:46:22.226977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.798 "name": "raid_bdev1", 00:13:56.798 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:13:56.798 "strip_size_kb": 0, 00:13:56.798 "state": "online", 00:13:56.798 "raid_level": "raid1", 00:13:56.798 "superblock": true, 00:13:56.798 "num_base_bdevs": 2, 00:13:56.798 "num_base_bdevs_discovered": 2, 00:13:56.798 "num_base_bdevs_operational": 2, 00:13:56.798 "process": { 00:13:56.798 "type": "rebuild", 00:13:56.798 "target": "spare", 00:13:56.798 "progress": { 
00:13:56.798 "blocks": 14336, 00:13:56.798 "percent": 22 00:13:56.798 } 00:13:56.798 }, 00:13:56.798 "base_bdevs_list": [ 00:13:56.798 { 00:13:56.798 "name": "spare", 00:13:56.798 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:13:56.798 "is_configured": true, 00:13:56.798 "data_offset": 2048, 00:13:56.798 "data_size": 63488 00:13:56.798 }, 00:13:56.798 { 00:13:56.798 "name": "BaseBdev2", 00:13:56.798 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:13:56.798 "is_configured": true, 00:13:56.798 "data_offset": 2048, 00:13:56.798 "data_size": 63488 00:13:56.798 } 00:13:56.798 ] 00:13:56.798 }' 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.798 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.799 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.057 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.057 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:57.057 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.057 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.057 [2024-11-04 11:46:22.336887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.057 [2024-11-04 11:46:22.544854] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:57.057 [2024-11-04 11:46:22.554299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.057 [2024-11-04 11:46:22.554465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:57.057 [2024-11-04 11:46:22.554502] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:13:57.316 [2024-11-04 11:46:22.603340] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.316 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.316 11:46:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.316 "name": "raid_bdev1", 00:13:57.316 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:13:57.316 "strip_size_kb": 0, 00:13:57.316 "state": "online", 00:13:57.316 "raid_level": "raid1", 00:13:57.316 "superblock": true, 00:13:57.316 "num_base_bdevs": 2, 00:13:57.316 "num_base_bdevs_discovered": 1, 00:13:57.316 "num_base_bdevs_operational": 1, 00:13:57.316 "base_bdevs_list": [ 00:13:57.316 { 00:13:57.316 "name": null, 00:13:57.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.317 "is_configured": false, 00:13:57.317 "data_offset": 0, 00:13:57.317 "data_size": 63488 00:13:57.317 }, 00:13:57.317 { 00:13:57.317 "name": "BaseBdev2", 00:13:57.317 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:13:57.317 "is_configured": true, 00:13:57.317 "data_offset": 2048, 00:13:57.317 "data_size": 63488 00:13:57.317 } 00:13:57.317 ] 00:13:57.317 }' 00:13:57.317 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.317 11:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.576 151.50 IOPS, 454.50 MiB/s [2024-11-04T11:46:23.098Z] 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.576 "name": "raid_bdev1", 00:13:57.576 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:13:57.576 "strip_size_kb": 0, 00:13:57.576 "state": "online", 00:13:57.576 "raid_level": "raid1", 00:13:57.576 "superblock": true, 00:13:57.576 "num_base_bdevs": 2, 00:13:57.576 "num_base_bdevs_discovered": 1, 00:13:57.576 "num_base_bdevs_operational": 1, 00:13:57.576 "base_bdevs_list": [ 00:13:57.576 { 00:13:57.576 "name": null, 00:13:57.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.576 "is_configured": false, 00:13:57.576 "data_offset": 0, 00:13:57.576 "data_size": 63488 00:13:57.576 }, 00:13:57.576 { 00:13:57.576 "name": "BaseBdev2", 00:13:57.576 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:13:57.576 "is_configured": true, 00:13:57.576 "data_offset": 2048, 00:13:57.576 "data_size": 63488 00:13:57.576 } 00:13:57.576 ] 00:13:57.576 }' 00:13:57.576 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.836 [2024-11-04 11:46:23.209000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.836 11:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:57.836 [2024-11-04 11:46:23.263391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:57.836 [2024-11-04 11:46:23.265455] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.095 [2024-11-04 11:46:23.386071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:58.095 [2024-11-04 11:46:23.386819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:58.095 [2024-11-04 11:46:23.603462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.095 [2024-11-04 11:46:23.603796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:58.614 162.67 IOPS, 488.00 MiB/s [2024-11-04T11:46:24.137Z] [2024-11-04 11:46:23.940824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:58.875 [2024-11-04 11:46:24.162572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.875 11:46:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.875 "name": "raid_bdev1", 00:13:58.875 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:13:58.875 "strip_size_kb": 0, 00:13:58.875 "state": "online", 00:13:58.875 "raid_level": "raid1", 00:13:58.875 "superblock": true, 00:13:58.875 "num_base_bdevs": 2, 00:13:58.875 "num_base_bdevs_discovered": 2, 00:13:58.875 "num_base_bdevs_operational": 2, 00:13:58.875 "process": { 00:13:58.875 "type": "rebuild", 00:13:58.875 "target": "spare", 00:13:58.875 "progress": { 00:13:58.875 "blocks": 10240, 00:13:58.875 "percent": 16 00:13:58.875 } 00:13:58.875 }, 00:13:58.875 "base_bdevs_list": [ 00:13:58.875 { 00:13:58.875 "name": "spare", 00:13:58.875 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:13:58.875 "is_configured": true, 00:13:58.875 "data_offset": 2048, 00:13:58.875 "data_size": 63488 00:13:58.875 }, 00:13:58.875 { 00:13:58.875 "name": "BaseBdev2", 00:13:58.875 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:13:58.875 "is_configured": true, 00:13:58.875 "data_offset": 2048, 00:13:58.875 "data_size": 
63488 00:13:58.875 } 00:13:58.875 ] 00:13:58.875 }' 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:58.875 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=426 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.875 11:46:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.875 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.135 "name": "raid_bdev1", 00:13:59.135 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:13:59.135 "strip_size_kb": 0, 00:13:59.135 "state": "online", 00:13:59.135 "raid_level": "raid1", 00:13:59.135 "superblock": true, 00:13:59.135 "num_base_bdevs": 2, 00:13:59.135 "num_base_bdevs_discovered": 2, 00:13:59.135 "num_base_bdevs_operational": 2, 00:13:59.135 "process": { 00:13:59.135 "type": "rebuild", 00:13:59.135 "target": "spare", 00:13:59.135 "progress": { 00:13:59.135 "blocks": 12288, 00:13:59.135 "percent": 19 00:13:59.135 } 00:13:59.135 }, 00:13:59.135 "base_bdevs_list": [ 00:13:59.135 { 00:13:59.135 "name": "spare", 00:13:59.135 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:13:59.135 "is_configured": true, 00:13:59.135 "data_offset": 2048, 00:13:59.135 "data_size": 63488 00:13:59.135 }, 00:13:59.135 { 00:13:59.135 "name": "BaseBdev2", 00:13:59.135 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:13:59.135 "is_configured": true, 00:13:59.135 "data_offset": 2048, 00:13:59.135 "data_size": 63488 00:13:59.135 } 00:13:59.135 ] 00:13:59.135 }' 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.135 [2024-11-04 11:46:24.481858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.135 11:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.653 145.75 IOPS, 437.25 MiB/s [2024-11-04T11:46:25.175Z] [2024-11-04 11:46:25.014345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.221 "name": "raid_bdev1", 00:14:00.221 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:00.221 "strip_size_kb": 0, 00:14:00.221 "state": "online", 00:14:00.221 "raid_level": "raid1", 00:14:00.221 "superblock": true, 00:14:00.221 "num_base_bdevs": 2, 00:14:00.221 "num_base_bdevs_discovered": 2, 00:14:00.221 "num_base_bdevs_operational": 2, 00:14:00.221 "process": { 00:14:00.221 "type": "rebuild", 00:14:00.221 "target": "spare", 00:14:00.221 "progress": { 00:14:00.221 "blocks": 30720, 00:14:00.221 "percent": 48 00:14:00.221 } 00:14:00.221 }, 00:14:00.221 "base_bdevs_list": [ 00:14:00.221 { 00:14:00.221 "name": "spare", 00:14:00.221 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:00.221 "is_configured": true, 00:14:00.221 "data_offset": 2048, 00:14:00.221 "data_size": 63488 00:14:00.221 }, 00:14:00.221 { 00:14:00.221 "name": "BaseBdev2", 00:14:00.221 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:00.221 "is_configured": true, 00:14:00.221 "data_offset": 2048, 00:14:00.221 "data_size": 63488 00:14:00.221 } 00:14:00.221 ] 00:14:00.221 }' 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.221 11:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.221 [2024-11-04 11:46:25.675228] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:00.481 127.20 IOPS, 381.60 MiB/s [2024-11-04T11:46:26.003Z] [2024-11-04 11:46:25.888928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.418 [2024-11-04 11:46:26.662386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.418 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.418 "name": "raid_bdev1", 00:14:01.418 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:01.418 "strip_size_kb": 0, 00:14:01.418 "state": "online", 00:14:01.418 "raid_level": "raid1", 00:14:01.418 "superblock": true, 00:14:01.418 "num_base_bdevs": 2, 00:14:01.418 "num_base_bdevs_discovered": 2, 00:14:01.418 "num_base_bdevs_operational": 2, 00:14:01.419 "process": { 00:14:01.419 "type": 
"rebuild", 00:14:01.419 "target": "spare", 00:14:01.419 "progress": { 00:14:01.419 "blocks": 51200, 00:14:01.419 "percent": 80 00:14:01.419 } 00:14:01.419 }, 00:14:01.419 "base_bdevs_list": [ 00:14:01.419 { 00:14:01.419 "name": "spare", 00:14:01.419 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:01.419 "is_configured": true, 00:14:01.419 "data_offset": 2048, 00:14:01.419 "data_size": 63488 00:14:01.419 }, 00:14:01.419 { 00:14:01.419 "name": "BaseBdev2", 00:14:01.419 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:01.419 "is_configured": true, 00:14:01.419 "data_offset": 2048, 00:14:01.419 "data_size": 63488 00:14:01.419 } 00:14:01.419 ] 00:14:01.419 }' 00:14:01.419 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.419 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.419 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.419 116.33 IOPS, 349.00 MiB/s [2024-11-04T11:46:26.941Z] 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.419 11:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.987 [2024-11-04 11:46:27.311972] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:01.987 [2024-11-04 11:46:27.411804] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:01.987 [2024-11-04 11:46:27.414419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.556 106.14 IOPS, 318.43 MiB/s [2024-11-04T11:46:28.078Z] 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.556 11:46:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.556 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.556 "name": "raid_bdev1", 00:14:02.556 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:02.556 "strip_size_kb": 0, 00:14:02.556 "state": "online", 00:14:02.556 "raid_level": "raid1", 00:14:02.556 "superblock": true, 00:14:02.556 "num_base_bdevs": 2, 00:14:02.556 "num_base_bdevs_discovered": 2, 00:14:02.556 "num_base_bdevs_operational": 2, 00:14:02.556 "base_bdevs_list": [ 00:14:02.556 { 00:14:02.556 "name": "spare", 00:14:02.556 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:02.556 "is_configured": true, 00:14:02.556 "data_offset": 2048, 00:14:02.557 "data_size": 63488 00:14:02.557 }, 00:14:02.557 { 00:14:02.557 "name": "BaseBdev2", 00:14:02.557 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:02.557 "is_configured": true, 00:14:02.557 "data_offset": 2048, 00:14:02.557 "data_size": 63488 00:14:02.557 } 00:14:02.557 ] 00:14:02.557 }' 00:14:02.557 11:46:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.557 "name": "raid_bdev1", 00:14:02.557 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:02.557 "strip_size_kb": 0, 00:14:02.557 "state": "online", 00:14:02.557 "raid_level": "raid1", 00:14:02.557 "superblock": 
true, 00:14:02.557 "num_base_bdevs": 2, 00:14:02.557 "num_base_bdevs_discovered": 2, 00:14:02.557 "num_base_bdevs_operational": 2, 00:14:02.557 "base_bdevs_list": [ 00:14:02.557 { 00:14:02.557 "name": "spare", 00:14:02.557 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:02.557 "is_configured": true, 00:14:02.557 "data_offset": 2048, 00:14:02.557 "data_size": 63488 00:14:02.557 }, 00:14:02.557 { 00:14:02.557 "name": "BaseBdev2", 00:14:02.557 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:02.557 "is_configured": true, 00:14:02.557 "data_offset": 2048, 00:14:02.557 "data_size": 63488 00:14:02.557 } 00:14:02.557 ] 00:14:02.557 }' 00:14:02.557 11:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.557 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.817 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.817 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.817 "name": "raid_bdev1", 00:14:02.817 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:02.817 "strip_size_kb": 0, 00:14:02.817 "state": "online", 00:14:02.817 "raid_level": "raid1", 00:14:02.817 "superblock": true, 00:14:02.817 "num_base_bdevs": 2, 00:14:02.817 "num_base_bdevs_discovered": 2, 00:14:02.817 "num_base_bdevs_operational": 2, 00:14:02.817 "base_bdevs_list": [ 00:14:02.817 { 00:14:02.817 "name": "spare", 00:14:02.817 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:02.817 "is_configured": true, 00:14:02.817 "data_offset": 2048, 00:14:02.817 "data_size": 63488 00:14:02.817 }, 00:14:02.817 { 00:14:02.817 "name": "BaseBdev2", 00:14:02.817 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:02.817 "is_configured": true, 00:14:02.817 "data_offset": 2048, 00:14:02.817 "data_size": 63488 00:14:02.817 } 00:14:02.817 ] 00:14:02.817 }' 00:14:02.817 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.817 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.077 11:46:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.077 [2024-11-04 11:46:28.464941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:03.077 [2024-11-04 11:46:28.465032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.077 00:14:03.077 Latency(us) 00:14:03.077 [2024-11-04T11:46:28.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.077 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:03.077 raid_bdev1 : 7.75 98.40 295.21 0.00 0.00 13868.37 327.32 108520.75 00:14:03.077 [2024-11-04T11:46:28.599Z] =================================================================================================================== 00:14:03.077 [2024-11-04T11:46:28.599Z] Total : 98.40 295.21 0.00 0.00 13868.37 327.32 108520.75 00:14:03.077 [2024-11-04 11:46:28.550238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.077 [2024-11-04 11:46:28.550356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.077 [2024-11-04 11:46:28.550527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.077 [2024-11-04 11:46:28.550594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:03.077 { 00:14:03.077 "results": [ 00:14:03.077 { 00:14:03.077 "job": "raid_bdev1", 00:14:03.077 "core_mask": "0x1", 00:14:03.077 "workload": "randrw", 00:14:03.077 "percentage": 50, 00:14:03.077 "status": "finished", 00:14:03.077 "queue_depth": 2, 00:14:03.077 "io_size": 3145728, 00:14:03.077 "runtime": 
7.753826, 00:14:03.077 "iops": 98.40303354756736, 00:14:03.077 "mibps": 295.2091006427021, 00:14:03.077 "io_failed": 0, 00:14:03.077 "io_timeout": 0, 00:14:03.077 "avg_latency_us": 13868.366281112822, 00:14:03.077 "min_latency_us": 327.32227074235806, 00:14:03.077 "max_latency_us": 108520.74759825328 00:14:03.077 } 00:14:03.077 ], 00:14:03.077 "core_count": 1 00:14:03.077 } 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:03.077 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:03.336 /dev/nbd0 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.336 1+0 records in 00:14:03.336 1+0 records out 00:14:03.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316632 s, 12.9 MB/s 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:03.336 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:03.595 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.596 11:46:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:03.596 /dev/nbd1 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.596 1+0 records in 00:14:03.596 1+0 records out 00:14:03.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280276 s, 14.6 MB/s 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.596 11:46:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.596 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:03.855 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:03.855 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.855 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:03.855 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.855 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:03.855 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.855 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.114 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:04.374 11:46:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.374 [2024-11-04 11:46:29.750707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.374 [2024-11-04 11:46:29.750809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.374 [2024-11-04 11:46:29.750863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:04.374 [2024-11-04 11:46:29.750894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.374 [2024-11-04 11:46:29.753163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.374 [2024-11-04 11:46:29.753243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.374 [2024-11-04 11:46:29.753357] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:04.374 [2024-11-04 11:46:29.753444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.374 [2024-11-04 11:46:29.753666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.374 spare 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.374 [2024-11-04 11:46:29.853632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:04.374 [2024-11-04 11:46:29.853756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:04.374 [2024-11-04 11:46:29.854186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:04.374 [2024-11-04 11:46:29.854456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:04.374 [2024-11-04 11:46:29.854516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:04.374 [2024-11-04 11:46:29.854750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.374 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.634 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.634 "name": "raid_bdev1", 00:14:04.634 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:04.634 "strip_size_kb": 0, 00:14:04.634 "state": "online", 00:14:04.634 "raid_level": "raid1", 00:14:04.634 "superblock": true, 00:14:04.634 "num_base_bdevs": 2, 00:14:04.634 "num_base_bdevs_discovered": 2, 00:14:04.634 "num_base_bdevs_operational": 2, 00:14:04.634 "base_bdevs_list": [ 00:14:04.634 { 00:14:04.634 "name": "spare", 00:14:04.634 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:04.634 "is_configured": true, 00:14:04.634 "data_offset": 2048, 00:14:04.634 "data_size": 63488 00:14:04.634 }, 00:14:04.634 { 00:14:04.634 "name": "BaseBdev2", 00:14:04.634 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:04.634 "is_configured": true, 00:14:04.634 "data_offset": 2048, 00:14:04.634 "data_size": 63488 00:14:04.634 } 00:14:04.634 ] 00:14:04.634 }' 00:14:04.634 11:46:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.634 
11:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.893 "name": "raid_bdev1", 00:14:04.893 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:04.893 "strip_size_kb": 0, 00:14:04.893 "state": "online", 00:14:04.893 "raid_level": "raid1", 00:14:04.893 "superblock": true, 00:14:04.893 "num_base_bdevs": 2, 00:14:04.893 "num_base_bdevs_discovered": 2, 00:14:04.893 "num_base_bdevs_operational": 2, 00:14:04.893 "base_bdevs_list": [ 00:14:04.893 { 00:14:04.893 "name": "spare", 00:14:04.893 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:04.893 "is_configured": true, 00:14:04.893 "data_offset": 2048, 00:14:04.893 "data_size": 63488 00:14:04.893 }, 00:14:04.893 { 00:14:04.893 "name": "BaseBdev2", 
00:14:04.893 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:04.893 "is_configured": true, 00:14:04.893 "data_offset": 2048, 00:14:04.893 "data_size": 63488 00:14:04.893 } 00:14:04.893 ] 00:14:04.893 }' 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.893 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.152 [2024-11-04 11:46:30.465792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.152 "name": "raid_bdev1", 00:14:05.152 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:05.152 "strip_size_kb": 0, 00:14:05.152 "state": "online", 00:14:05.152 "raid_level": "raid1", 00:14:05.152 "superblock": true, 00:14:05.152 "num_base_bdevs": 2, 00:14:05.152 "num_base_bdevs_discovered": 1, 
00:14:05.152 "num_base_bdevs_operational": 1, 00:14:05.152 "base_bdevs_list": [ 00:14:05.152 { 00:14:05.152 "name": null, 00:14:05.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.152 "is_configured": false, 00:14:05.152 "data_offset": 0, 00:14:05.152 "data_size": 63488 00:14:05.152 }, 00:14:05.152 { 00:14:05.152 "name": "BaseBdev2", 00:14:05.152 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:05.152 "is_configured": true, 00:14:05.152 "data_offset": 2048, 00:14:05.152 "data_size": 63488 00:14:05.152 } 00:14:05.152 ] 00:14:05.152 }' 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.152 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.412 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:05.412 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.412 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.678 [2024-11-04 11:46:30.933091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.678 [2024-11-04 11:46:30.933381] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:05.678 [2024-11-04 11:46:30.933453] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:05.678 [2024-11-04 11:46:30.933536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.678 [2024-11-04 11:46:30.950203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:05.678 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.678 11:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:05.678 [2024-11-04 11:46:30.952177] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.626 11:46:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.626 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.626 "name": "raid_bdev1", 00:14:06.626 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:06.626 "strip_size_kb": 0, 00:14:06.626 "state": "online", 
00:14:06.626 "raid_level": "raid1", 00:14:06.626 "superblock": true, 00:14:06.626 "num_base_bdevs": 2, 00:14:06.626 "num_base_bdevs_discovered": 2, 00:14:06.626 "num_base_bdevs_operational": 2, 00:14:06.626 "process": { 00:14:06.626 "type": "rebuild", 00:14:06.626 "target": "spare", 00:14:06.626 "progress": { 00:14:06.626 "blocks": 20480, 00:14:06.626 "percent": 32 00:14:06.626 } 00:14:06.626 }, 00:14:06.626 "base_bdevs_list": [ 00:14:06.626 { 00:14:06.626 "name": "spare", 00:14:06.626 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:06.626 "is_configured": true, 00:14:06.626 "data_offset": 2048, 00:14:06.626 "data_size": 63488 00:14:06.626 }, 00:14:06.626 { 00:14:06.626 "name": "BaseBdev2", 00:14:06.626 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:06.626 "is_configured": true, 00:14:06.626 "data_offset": 2048, 00:14:06.626 "data_size": 63488 00:14:06.626 } 00:14:06.626 ] 00:14:06.626 }' 00:14:06.627 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.627 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.627 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.627 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.627 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.627 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.627 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.627 [2024-11-04 11:46:32.115902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.885 [2024-11-04 11:46:32.158108] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.885 [2024-11-04 
11:46:32.158182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.885 [2024-11-04 11:46:32.158201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.885 [2024-11-04 11:46:32.158209] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.885 "name": "raid_bdev1", 00:14:06.885 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:06.885 "strip_size_kb": 0, 00:14:06.885 "state": "online", 00:14:06.885 "raid_level": "raid1", 00:14:06.885 "superblock": true, 00:14:06.885 "num_base_bdevs": 2, 00:14:06.885 "num_base_bdevs_discovered": 1, 00:14:06.885 "num_base_bdevs_operational": 1, 00:14:06.885 "base_bdevs_list": [ 00:14:06.885 { 00:14:06.885 "name": null, 00:14:06.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.885 "is_configured": false, 00:14:06.885 "data_offset": 0, 00:14:06.885 "data_size": 63488 00:14:06.885 }, 00:14:06.885 { 00:14:06.885 "name": "BaseBdev2", 00:14:06.885 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:06.885 "is_configured": true, 00:14:06.885 "data_offset": 2048, 00:14:06.885 "data_size": 63488 00:14:06.885 } 00:14:06.885 ] 00:14:06.885 }' 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.885 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.144 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.144 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.144 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.403 [2024-11-04 11:46:32.670147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.403 [2024-11-04 11:46:32.670545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.403 [2024-11-04 11:46:32.670617] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:07.403 [2024-11-04 11:46:32.670662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.403 [2024-11-04 11:46:32.671190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.403 [2024-11-04 11:46:32.671435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.403 [2024-11-04 11:46:32.671636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:07.403 [2024-11-04 11:46:32.671686] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.403 [2024-11-04 11:46:32.671738] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:07.403 [2024-11-04 11:46:32.671862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.404 [2024-11-04 11:46:32.689068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:07.404 spare 00:14:07.404 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.404 11:46:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:07.404 [2024-11-04 11:46:32.691059] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.340 "name": "raid_bdev1", 00:14:08.340 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:08.340 "strip_size_kb": 0, 00:14:08.340 "state": "online", 00:14:08.340 "raid_level": "raid1", 00:14:08.340 "superblock": true, 00:14:08.340 "num_base_bdevs": 2, 00:14:08.340 "num_base_bdevs_discovered": 2, 00:14:08.340 "num_base_bdevs_operational": 2, 00:14:08.340 "process": { 00:14:08.340 "type": "rebuild", 00:14:08.340 "target": "spare", 00:14:08.340 "progress": { 00:14:08.340 "blocks": 20480, 00:14:08.340 "percent": 32 00:14:08.340 } 00:14:08.340 }, 00:14:08.340 "base_bdevs_list": [ 00:14:08.340 { 00:14:08.340 "name": "spare", 00:14:08.340 "uuid": "fcbb83d2-b851-57f7-bcc3-4b522c832c13", 00:14:08.340 "is_configured": true, 00:14:08.340 "data_offset": 2048, 00:14:08.340 "data_size": 63488 00:14:08.340 }, 00:14:08.340 { 00:14:08.340 "name": "BaseBdev2", 00:14:08.340 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:08.340 "is_configured": true, 00:14:08.340 "data_offset": 2048, 00:14:08.340 "data_size": 63488 00:14:08.340 } 00:14:08.340 ] 00:14:08.340 }' 00:14:08.340 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.341 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:08.341 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.341 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.341 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.341 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.341 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.341 [2024-11-04 11:46:33.814634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.600 [2024-11-04 11:46:33.896865] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.600 [2024-11-04 11:46:33.897413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.600 [2024-11-04 11:46:33.897439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.600 [2024-11-04 11:46:33.897452] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.600 "name": "raid_bdev1", 00:14:08.600 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:08.600 "strip_size_kb": 0, 00:14:08.600 "state": "online", 00:14:08.600 "raid_level": "raid1", 00:14:08.600 "superblock": true, 00:14:08.600 "num_base_bdevs": 2, 00:14:08.600 "num_base_bdevs_discovered": 1, 00:14:08.600 "num_base_bdevs_operational": 1, 00:14:08.600 "base_bdevs_list": [ 00:14:08.600 { 00:14:08.600 "name": null, 00:14:08.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.600 "is_configured": false, 00:14:08.600 "data_offset": 0, 00:14:08.600 "data_size": 63488 00:14:08.600 }, 00:14:08.600 { 00:14:08.600 "name": "BaseBdev2", 00:14:08.600 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:08.600 "is_configured": true, 00:14:08.600 "data_offset": 2048, 00:14:08.600 "data_size": 63488 00:14:08.600 } 00:14:08.600 ] 00:14:08.600 }' 
00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.600 11:46:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.167 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.167 "name": "raid_bdev1", 00:14:09.167 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:09.167 "strip_size_kb": 0, 00:14:09.167 "state": "online", 00:14:09.168 "raid_level": "raid1", 00:14:09.168 "superblock": true, 00:14:09.168 "num_base_bdevs": 2, 00:14:09.168 "num_base_bdevs_discovered": 1, 00:14:09.168 "num_base_bdevs_operational": 1, 00:14:09.168 "base_bdevs_list": [ 00:14:09.168 { 00:14:09.168 "name": null, 00:14:09.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.168 "is_configured": false, 00:14:09.168 "data_offset": 0, 
00:14:09.168 "data_size": 63488 00:14:09.168 }, 00:14:09.168 { 00:14:09.168 "name": "BaseBdev2", 00:14:09.168 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:09.168 "is_configured": true, 00:14:09.168 "data_offset": 2048, 00:14:09.168 "data_size": 63488 00:14:09.168 } 00:14:09.168 ] 00:14:09.168 }' 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.168 [2024-11-04 11:46:34.572253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:09.168 [2024-11-04 11:46:34.572404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.168 [2024-11-04 11:46:34.572433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:09.168 [2024-11-04 11:46:34.572446] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.168 [2024-11-04 11:46:34.572964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.168 [2024-11-04 11:46:34.573001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.168 [2024-11-04 11:46:34.573097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:09.168 [2024-11-04 11:46:34.573118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:09.168 [2024-11-04 11:46:34.573127] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:09.168 [2024-11-04 11:46:34.573140] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:09.168 BaseBdev1 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.168 11:46:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.117 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.117 "name": "raid_bdev1", 00:14:10.117 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:10.117 "strip_size_kb": 0, 00:14:10.117 "state": "online", 00:14:10.117 "raid_level": "raid1", 00:14:10.117 "superblock": true, 00:14:10.117 "num_base_bdevs": 2, 00:14:10.118 "num_base_bdevs_discovered": 1, 00:14:10.118 "num_base_bdevs_operational": 1, 00:14:10.118 "base_bdevs_list": [ 00:14:10.118 { 00:14:10.118 "name": null, 00:14:10.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.118 "is_configured": false, 00:14:10.118 "data_offset": 0, 00:14:10.118 "data_size": 63488 00:14:10.118 }, 00:14:10.118 { 00:14:10.118 "name": "BaseBdev2", 00:14:10.118 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:10.118 "is_configured": true, 00:14:10.118 "data_offset": 2048, 00:14:10.118 "data_size": 63488 00:14:10.118 } 00:14:10.118 ] 00:14:10.118 }' 00:14:10.118 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.118 11:46:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.684 "name": "raid_bdev1", 00:14:10.684 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:10.684 "strip_size_kb": 0, 00:14:10.684 "state": "online", 00:14:10.684 "raid_level": "raid1", 00:14:10.684 "superblock": true, 00:14:10.684 "num_base_bdevs": 2, 00:14:10.684 "num_base_bdevs_discovered": 1, 00:14:10.684 "num_base_bdevs_operational": 1, 00:14:10.684 "base_bdevs_list": [ 00:14:10.684 { 00:14:10.684 "name": null, 00:14:10.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.684 "is_configured": false, 00:14:10.684 "data_offset": 0, 00:14:10.684 "data_size": 63488 00:14:10.684 }, 00:14:10.684 { 00:14:10.684 "name": "BaseBdev2", 00:14:10.684 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:10.684 "is_configured": true, 
00:14:10.684 "data_offset": 2048, 00:14:10.684 "data_size": 63488 00:14:10.684 } 00:14:10.684 ] 00:14:10.684 }' 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.684 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.685 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.685 [2024-11-04 11:46:36.198053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.685 [2024-11-04 11:46:36.198274] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:10.685 [2024-11-04 11:46:36.198342] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:10.685 request: 00:14:10.685 { 00:14:10.944 "base_bdev": "BaseBdev1", 00:14:10.944 "raid_bdev": "raid_bdev1", 00:14:10.944 "method": "bdev_raid_add_base_bdev", 00:14:10.944 "req_id": 1 00:14:10.944 } 00:14:10.944 Got JSON-RPC error response 00:14:10.944 response: 00:14:10.944 { 00:14:10.944 "code": -22, 00:14:10.944 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:10.944 } 00:14:10.944 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:10.944 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:10.944 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.944 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.944 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.944 11:46:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.883 "name": "raid_bdev1", 00:14:11.883 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:11.883 "strip_size_kb": 0, 00:14:11.883 "state": "online", 00:14:11.883 "raid_level": "raid1", 00:14:11.883 "superblock": true, 00:14:11.883 "num_base_bdevs": 2, 00:14:11.883 "num_base_bdevs_discovered": 1, 00:14:11.883 "num_base_bdevs_operational": 1, 00:14:11.883 "base_bdevs_list": [ 00:14:11.883 { 00:14:11.883 "name": null, 00:14:11.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.883 "is_configured": false, 00:14:11.883 "data_offset": 0, 00:14:11.883 "data_size": 63488 00:14:11.883 }, 00:14:11.883 { 00:14:11.883 "name": "BaseBdev2", 00:14:11.883 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:11.883 "is_configured": true, 00:14:11.883 "data_offset": 2048, 00:14:11.883 "data_size": 63488 00:14:11.883 } 00:14:11.883 ] 00:14:11.883 }' 
00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.883 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.452 "name": "raid_bdev1", 00:14:12.452 "uuid": "a59e3d5b-14cd-4923-b03f-8b4dca9ca49f", 00:14:12.452 "strip_size_kb": 0, 00:14:12.452 "state": "online", 00:14:12.452 "raid_level": "raid1", 00:14:12.452 "superblock": true, 00:14:12.452 "num_base_bdevs": 2, 00:14:12.452 "num_base_bdevs_discovered": 1, 00:14:12.452 "num_base_bdevs_operational": 1, 00:14:12.452 "base_bdevs_list": [ 00:14:12.452 { 00:14:12.452 "name": null, 00:14:12.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.452 "is_configured": false, 00:14:12.452 "data_offset": 0, 
00:14:12.452 "data_size": 63488 00:14:12.452 }, 00:14:12.452 { 00:14:12.452 "name": "BaseBdev2", 00:14:12.452 "uuid": "00810211-1fa6-56d1-ac4e-1f7aec1d87ad", 00:14:12.452 "is_configured": true, 00:14:12.452 "data_offset": 2048, 00:14:12.452 "data_size": 63488 00:14:12.452 } 00:14:12.452 ] 00:14:12.452 }' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77097 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77097 ']' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77097 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77097 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:12.452 killing process with pid 77097 00:14:12.452 Received shutdown signal, test time was about 17.098465 seconds 00:14:12.452 00:14:12.452 Latency(us) 00:14:12.452 [2024-11-04T11:46:37.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.452 [2024-11-04T11:46:37.974Z] =================================================================================================================== 00:14:12.452 
[2024-11-04T11:46:37.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77097' 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77097 00:14:12.452 [2024-11-04 11:46:37.851403] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.452 [2024-11-04 11:46:37.851544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.452 [2024-11-04 11:46:37.851604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.452 11:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77097 00:14:12.452 [2024-11-04 11:46:37.851614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:12.711 [2024-11-04 11:46:38.090629] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:14.090 00:14:14.090 real 0m20.435s 00:14:14.090 user 0m26.706s 00:14:14.090 sys 0m2.198s 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.090 ************************************ 00:14:14.090 END TEST raid_rebuild_test_sb_io 00:14:14.090 ************************************ 00:14:14.090 11:46:39 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:14.090 11:46:39 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:14.090 11:46:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 
']' 00:14:14.090 11:46:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.090 11:46:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.090 ************************************ 00:14:14.090 START TEST raid_rebuild_test 00:14:14.090 ************************************ 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77791 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77791 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77791 ']' 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.090 
11:46:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.090 11:46:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.090 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:14.090 Zero copy mechanism will not be used. 00:14:14.090 [2024-11-04 11:46:39.470674] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:14:14.090 [2024-11-04 11:46:39.470792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77791 ] 00:14:14.356 [2024-11-04 11:46:39.623142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.356 [2024-11-04 11:46:39.742280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.616 [2024-11-04 11:46:39.947041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.616 [2024-11-04 11:46:39.947078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.875 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.875 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:14.875 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.875 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 
-b BaseBdev1_malloc 00:14:14.875 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.875 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.135 BaseBdev1_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.135 [2024-11-04 11:46:40.417596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:15.135 [2024-11-04 11:46:40.417737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.135 [2024-11-04 11:46:40.417810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:15.135 [2024-11-04 11:46:40.417858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.135 [2024-11-04 11:46:40.420056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.135 [2024-11-04 11:46:40.420142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.135 BaseBdev1 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:15.135 BaseBdev2_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.135 [2024-11-04 11:46:40.475562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:15.135 [2024-11-04 11:46:40.475673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.135 [2024-11-04 11:46:40.475712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:15.135 [2024-11-04 11:46:40.475741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.135 [2024-11-04 11:46:40.478004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.135 [2024-11-04 11:46:40.478083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:15.135 BaseBdev2 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.135 BaseBdev3_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.135 [2024-11-04 11:46:40.548683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:15.135 [2024-11-04 11:46:40.548812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.135 [2024-11-04 11:46:40.548873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:15.135 [2024-11-04 11:46:40.548916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.135 [2024-11-04 11:46:40.551432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.135 [2024-11-04 11:46:40.551517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:15.135 BaseBdev3 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.135 BaseBdev4_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.135 [2024-11-04 11:46:40.605458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:15.135 [2024-11-04 11:46:40.605566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.135 [2024-11-04 11:46:40.605644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:15.135 [2024-11-04 11:46:40.605687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.135 [2024-11-04 11:46:40.608008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.135 [2024-11-04 11:46:40.608091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:15.135 BaseBdev4 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.135 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.395 spare_malloc 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.395 spare_delay 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:15.395 
11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.395 [2024-11-04 11:46:40.676880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:15.395 [2024-11-04 11:46:40.677031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.395 [2024-11-04 11:46:40.677104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:15.395 [2024-11-04 11:46:40.677152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.395 [2024-11-04 11:46:40.679756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.395 [2024-11-04 11:46:40.679845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:15.395 spare 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.395 [2024-11-04 11:46:40.688969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.395 [2024-11-04 11:46:40.691036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.395 [2024-11-04 11:46:40.691171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.395 [2024-11-04 11:46:40.691301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.395 [2024-11-04 11:46:40.691480] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:14:15.395 [2024-11-04 11:46:40.691534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:15.395 [2024-11-04 11:46:40.691908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:15.395 [2024-11-04 11:46:40.692189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:15.395 [2024-11-04 11:46:40.692251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:15.395 [2024-11-04 11:46:40.692520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.395 11:46:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.395 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.395 "name": "raid_bdev1", 00:14:15.395 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:15.395 "strip_size_kb": 0, 00:14:15.395 "state": "online", 00:14:15.395 "raid_level": "raid1", 00:14:15.395 "superblock": false, 00:14:15.395 "num_base_bdevs": 4, 00:14:15.395 "num_base_bdevs_discovered": 4, 00:14:15.395 "num_base_bdevs_operational": 4, 00:14:15.395 "base_bdevs_list": [ 00:14:15.395 { 00:14:15.395 "name": "BaseBdev1", 00:14:15.395 "uuid": "41ad59e8-3ec1-5815-82e5-c2482d91f547", 00:14:15.395 "is_configured": true, 00:14:15.395 "data_offset": 0, 00:14:15.395 "data_size": 65536 00:14:15.395 }, 00:14:15.395 { 00:14:15.395 "name": "BaseBdev2", 00:14:15.395 "uuid": "50a33ff2-74e3-5f1d-ad12-345a7037696d", 00:14:15.395 "is_configured": true, 00:14:15.395 "data_offset": 0, 00:14:15.395 "data_size": 65536 00:14:15.395 }, 00:14:15.395 { 00:14:15.395 "name": "BaseBdev3", 00:14:15.395 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:15.395 "is_configured": true, 00:14:15.395 "data_offset": 0, 00:14:15.395 "data_size": 65536 00:14:15.395 }, 00:14:15.395 { 00:14:15.395 "name": "BaseBdev4", 00:14:15.395 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:15.395 "is_configured": true, 00:14:15.395 "data_offset": 0, 00:14:15.395 "data_size": 65536 00:14:15.395 } 00:14:15.395 ] 00:14:15.396 }' 00:14:15.396 11:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.396 11:46:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:15.655 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.655 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.655 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.655 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:15.655 [2024-11-04 11:46:41.132581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.655 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.655 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:15.914 [2024-11-04 11:46:41.395783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:15.914 /dev/nbd0 00:14:15.914 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:16.174 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:16.174 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:16.174 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:16.175 11:46:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.175 1+0 records in 00:14:16.175 1+0 records out 00:14:16.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578495 s, 7.1 MB/s 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:16.175 11:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:22.748 65536+0 records in 00:14:22.748 65536+0 records out 00:14:22.748 33554432 bytes (34 MB, 32 MiB) copied, 5.86626 s, 5.7 MB/s 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.748 
11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:22.748 [2024-11-04 11:46:47.534549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.748 [2024-11-04 11:46:47.570766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.748 "name": "raid_bdev1", 00:14:22.748 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:22.748 "strip_size_kb": 0, 00:14:22.748 "state": "online", 00:14:22.748 "raid_level": "raid1", 00:14:22.748 "superblock": false, 00:14:22.748 "num_base_bdevs": 4, 00:14:22.748 "num_base_bdevs_discovered": 3, 00:14:22.748 "num_base_bdevs_operational": 3, 00:14:22.748 "base_bdevs_list": [ 00:14:22.748 { 00:14:22.748 "name": null, 00:14:22.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.748 
"is_configured": false, 00:14:22.748 "data_offset": 0, 00:14:22.748 "data_size": 65536 00:14:22.748 }, 00:14:22.748 { 00:14:22.748 "name": "BaseBdev2", 00:14:22.748 "uuid": "50a33ff2-74e3-5f1d-ad12-345a7037696d", 00:14:22.748 "is_configured": true, 00:14:22.748 "data_offset": 0, 00:14:22.748 "data_size": 65536 00:14:22.748 }, 00:14:22.748 { 00:14:22.748 "name": "BaseBdev3", 00:14:22.748 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:22.748 "is_configured": true, 00:14:22.748 "data_offset": 0, 00:14:22.748 "data_size": 65536 00:14:22.748 }, 00:14:22.748 { 00:14:22.748 "name": "BaseBdev4", 00:14:22.748 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:22.748 "is_configured": true, 00:14:22.748 "data_offset": 0, 00:14:22.748 "data_size": 65536 00:14:22.748 } 00:14:22.748 ] 00:14:22.748 }' 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.748 11:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.748 11:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:22.748 11:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.748 11:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.748 [2024-11-04 11:46:48.065957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:22.748 [2024-11-04 11:46:48.084077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:22.748 11:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.748 [2024-11-04 11:46:48.086368] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.748 11:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.686 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.686 "name": "raid_bdev1", 00:14:23.686 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:23.686 "strip_size_kb": 0, 00:14:23.686 "state": "online", 00:14:23.686 "raid_level": "raid1", 00:14:23.686 "superblock": false, 00:14:23.686 "num_base_bdevs": 4, 00:14:23.686 "num_base_bdevs_discovered": 4, 00:14:23.686 "num_base_bdevs_operational": 4, 00:14:23.686 "process": { 00:14:23.686 "type": "rebuild", 00:14:23.686 "target": "spare", 00:14:23.686 "progress": { 00:14:23.686 "blocks": 20480, 00:14:23.686 "percent": 31 00:14:23.686 } 00:14:23.686 }, 00:14:23.686 "base_bdevs_list": [ 00:14:23.686 { 00:14:23.686 "name": "spare", 00:14:23.686 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:23.686 "is_configured": true, 00:14:23.686 "data_offset": 0, 00:14:23.686 "data_size": 65536 00:14:23.687 }, 00:14:23.687 { 00:14:23.687 "name": "BaseBdev2", 00:14:23.687 "uuid": 
"50a33ff2-74e3-5f1d-ad12-345a7037696d", 00:14:23.687 "is_configured": true, 00:14:23.687 "data_offset": 0, 00:14:23.687 "data_size": 65536 00:14:23.687 }, 00:14:23.687 { 00:14:23.687 "name": "BaseBdev3", 00:14:23.687 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:23.687 "is_configured": true, 00:14:23.687 "data_offset": 0, 00:14:23.687 "data_size": 65536 00:14:23.687 }, 00:14:23.687 { 00:14:23.687 "name": "BaseBdev4", 00:14:23.687 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:23.687 "is_configured": true, 00:14:23.687 "data_offset": 0, 00:14:23.687 "data_size": 65536 00:14:23.687 } 00:14:23.687 ] 00:14:23.687 }' 00:14:23.687 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.687 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.687 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 [2024-11-04 11:46:49.253246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.946 [2024-11-04 11:46:49.292153] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:23.946 [2024-11-04 11:46:49.292240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.946 [2024-11-04 11:46:49.292257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:23.946 [2024-11-04 11:46:49.292267] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.946 "name": "raid_bdev1", 00:14:23.946 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:23.946 "strip_size_kb": 0, 00:14:23.946 "state": "online", 
00:14:23.946 "raid_level": "raid1", 00:14:23.946 "superblock": false, 00:14:23.946 "num_base_bdevs": 4, 00:14:23.946 "num_base_bdevs_discovered": 3, 00:14:23.946 "num_base_bdevs_operational": 3, 00:14:23.946 "base_bdevs_list": [ 00:14:23.946 { 00:14:23.946 "name": null, 00:14:23.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.946 "is_configured": false, 00:14:23.946 "data_offset": 0, 00:14:23.946 "data_size": 65536 00:14:23.946 }, 00:14:23.946 { 00:14:23.946 "name": "BaseBdev2", 00:14:23.946 "uuid": "50a33ff2-74e3-5f1d-ad12-345a7037696d", 00:14:23.946 "is_configured": true, 00:14:23.946 "data_offset": 0, 00:14:23.946 "data_size": 65536 00:14:23.946 }, 00:14:23.946 { 00:14:23.946 "name": "BaseBdev3", 00:14:23.946 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:23.946 "is_configured": true, 00:14:23.946 "data_offset": 0, 00:14:23.946 "data_size": 65536 00:14:23.946 }, 00:14:23.946 { 00:14:23.946 "name": "BaseBdev4", 00:14:23.946 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:23.946 "is_configured": true, 00:14:23.946 "data_offset": 0, 00:14:23.946 "data_size": 65536 00:14:23.946 } 00:14:23.946 ] 00:14:23.946 }' 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.946 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.513 "name": "raid_bdev1", 00:14:24.513 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:24.513 "strip_size_kb": 0, 00:14:24.513 "state": "online", 00:14:24.513 "raid_level": "raid1", 00:14:24.513 "superblock": false, 00:14:24.513 "num_base_bdevs": 4, 00:14:24.513 "num_base_bdevs_discovered": 3, 00:14:24.513 "num_base_bdevs_operational": 3, 00:14:24.513 "base_bdevs_list": [ 00:14:24.513 { 00:14:24.513 "name": null, 00:14:24.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.513 "is_configured": false, 00:14:24.513 "data_offset": 0, 00:14:24.513 "data_size": 65536 00:14:24.513 }, 00:14:24.513 { 00:14:24.513 "name": "BaseBdev2", 00:14:24.513 "uuid": "50a33ff2-74e3-5f1d-ad12-345a7037696d", 00:14:24.513 "is_configured": true, 00:14:24.513 "data_offset": 0, 00:14:24.513 "data_size": 65536 00:14:24.513 }, 00:14:24.513 { 00:14:24.513 "name": "BaseBdev3", 00:14:24.513 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:24.513 "is_configured": true, 00:14:24.513 "data_offset": 0, 00:14:24.513 "data_size": 65536 00:14:24.513 }, 00:14:24.513 { 00:14:24.513 "name": "BaseBdev4", 00:14:24.513 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:24.513 "is_configured": true, 00:14:24.513 "data_offset": 0, 00:14:24.513 "data_size": 65536 00:14:24.513 } 00:14:24.513 ] 00:14:24.513 }' 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.513 [2024-11-04 11:46:49.909553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.513 [2024-11-04 11:46:49.926209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.513 11:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:24.513 [2024-11-04 11:46:49.928872] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.449 11:46:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.449 11:46:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.709 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.709 "name": "raid_bdev1", 00:14:25.709 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:25.709 "strip_size_kb": 0, 00:14:25.709 "state": "online", 00:14:25.709 "raid_level": "raid1", 00:14:25.709 "superblock": false, 00:14:25.709 "num_base_bdevs": 4, 00:14:25.709 "num_base_bdevs_discovered": 4, 00:14:25.709 "num_base_bdevs_operational": 4, 00:14:25.709 "process": { 00:14:25.709 "type": "rebuild", 00:14:25.709 "target": "spare", 00:14:25.709 "progress": { 00:14:25.709 "blocks": 20480, 00:14:25.709 "percent": 31 00:14:25.709 } 00:14:25.709 }, 00:14:25.709 "base_bdevs_list": [ 00:14:25.709 { 00:14:25.709 "name": "spare", 00:14:25.709 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:25.709 "is_configured": true, 00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 }, 00:14:25.709 { 00:14:25.709 "name": "BaseBdev2", 00:14:25.709 "uuid": "50a33ff2-74e3-5f1d-ad12-345a7037696d", 00:14:25.709 "is_configured": true, 00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 }, 00:14:25.709 { 00:14:25.709 "name": "BaseBdev3", 00:14:25.709 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:25.709 "is_configured": true, 00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 }, 00:14:25.709 { 00:14:25.709 "name": "BaseBdev4", 00:14:25.709 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:25.709 "is_configured": true, 00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 } 00:14:25.709 ] 00:14:25.709 }' 00:14:25.709 11:46:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.709 [2024-11-04 11:46:51.092570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.709 [2024-11-04 11:46:51.139263] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.709 
11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.709 "name": "raid_bdev1", 00:14:25.709 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:25.709 "strip_size_kb": 0, 00:14:25.709 "state": "online", 00:14:25.709 "raid_level": "raid1", 00:14:25.709 "superblock": false, 00:14:25.709 "num_base_bdevs": 4, 00:14:25.709 "num_base_bdevs_discovered": 3, 00:14:25.709 "num_base_bdevs_operational": 3, 00:14:25.709 "process": { 00:14:25.709 "type": "rebuild", 00:14:25.709 "target": "spare", 00:14:25.709 "progress": { 00:14:25.709 "blocks": 24576, 00:14:25.709 "percent": 37 00:14:25.709 } 00:14:25.709 }, 00:14:25.709 "base_bdevs_list": [ 00:14:25.709 { 00:14:25.709 "name": "spare", 00:14:25.709 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:25.709 "is_configured": true, 00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 }, 00:14:25.709 { 00:14:25.709 "name": null, 00:14:25.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.709 "is_configured": false, 00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 }, 00:14:25.709 { 00:14:25.709 "name": "BaseBdev3", 00:14:25.709 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:25.709 "is_configured": true, 
00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 }, 00:14:25.709 { 00:14:25.709 "name": "BaseBdev4", 00:14:25.709 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:25.709 "is_configured": true, 00:14:25.709 "data_offset": 0, 00:14:25.709 "data_size": 65536 00:14:25.709 } 00:14:25.709 ] 00:14:25.709 }' 00:14:25.709 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.969 11:46:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.969 "name": "raid_bdev1", 00:14:25.969 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:25.969 "strip_size_kb": 0, 00:14:25.969 "state": "online", 00:14:25.969 "raid_level": "raid1", 00:14:25.969 "superblock": false, 00:14:25.969 "num_base_bdevs": 4, 00:14:25.969 "num_base_bdevs_discovered": 3, 00:14:25.969 "num_base_bdevs_operational": 3, 00:14:25.969 "process": { 00:14:25.969 "type": "rebuild", 00:14:25.969 "target": "spare", 00:14:25.969 "progress": { 00:14:25.969 "blocks": 26624, 00:14:25.969 "percent": 40 00:14:25.969 } 00:14:25.969 }, 00:14:25.969 "base_bdevs_list": [ 00:14:25.969 { 00:14:25.969 "name": "spare", 00:14:25.969 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:25.969 "is_configured": true, 00:14:25.969 "data_offset": 0, 00:14:25.969 "data_size": 65536 00:14:25.969 }, 00:14:25.969 { 00:14:25.969 "name": null, 00:14:25.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.969 "is_configured": false, 00:14:25.969 "data_offset": 0, 00:14:25.969 "data_size": 65536 00:14:25.969 }, 00:14:25.969 { 00:14:25.969 "name": "BaseBdev3", 00:14:25.969 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:25.969 "is_configured": true, 00:14:25.969 "data_offset": 0, 00:14:25.969 "data_size": 65536 00:14:25.969 }, 00:14:25.969 { 00:14:25.969 "name": "BaseBdev4", 00:14:25.969 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:25.969 "is_configured": true, 00:14:25.969 "data_offset": 0, 00:14:25.969 "data_size": 65536 00:14:25.969 } 00:14:25.969 ] 00:14:25.969 }' 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.969 11:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.455 "name": "raid_bdev1", 00:14:27.455 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:27.455 "strip_size_kb": 0, 00:14:27.455 "state": "online", 00:14:27.455 "raid_level": "raid1", 00:14:27.455 "superblock": false, 00:14:27.455 "num_base_bdevs": 4, 00:14:27.455 "num_base_bdevs_discovered": 3, 00:14:27.455 "num_base_bdevs_operational": 3, 00:14:27.455 "process": { 00:14:27.455 "type": "rebuild", 00:14:27.455 "target": "spare", 00:14:27.455 "progress": { 00:14:27.455 
"blocks": 49152, 00:14:27.455 "percent": 75 00:14:27.455 } 00:14:27.455 }, 00:14:27.455 "base_bdevs_list": [ 00:14:27.455 { 00:14:27.455 "name": "spare", 00:14:27.455 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:27.455 "is_configured": true, 00:14:27.455 "data_offset": 0, 00:14:27.455 "data_size": 65536 00:14:27.455 }, 00:14:27.455 { 00:14:27.455 "name": null, 00:14:27.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.455 "is_configured": false, 00:14:27.455 "data_offset": 0, 00:14:27.455 "data_size": 65536 00:14:27.455 }, 00:14:27.455 { 00:14:27.455 "name": "BaseBdev3", 00:14:27.455 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:27.455 "is_configured": true, 00:14:27.455 "data_offset": 0, 00:14:27.455 "data_size": 65536 00:14:27.455 }, 00:14:27.455 { 00:14:27.455 "name": "BaseBdev4", 00:14:27.455 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:27.455 "is_configured": true, 00:14:27.455 "data_offset": 0, 00:14:27.455 "data_size": 65536 00:14:27.455 } 00:14:27.455 ] 00:14:27.455 }' 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.455 11:46:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.714 [2024-11-04 11:46:53.156608] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:27.714 [2024-11-04 11:46:53.156928] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:27.714 [2024-11-04 11:46:53.157053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.282 "name": "raid_bdev1", 00:14:28.282 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:28.282 "strip_size_kb": 0, 00:14:28.282 "state": "online", 00:14:28.282 "raid_level": "raid1", 00:14:28.282 "superblock": false, 00:14:28.282 "num_base_bdevs": 4, 00:14:28.282 "num_base_bdevs_discovered": 3, 00:14:28.282 "num_base_bdevs_operational": 3, 00:14:28.282 "base_bdevs_list": [ 00:14:28.282 { 00:14:28.282 "name": "spare", 00:14:28.282 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:28.282 "is_configured": true, 00:14:28.282 "data_offset": 0, 00:14:28.282 "data_size": 65536 00:14:28.282 }, 00:14:28.282 { 00:14:28.282 "name": null, 00:14:28.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.282 "is_configured": false, 00:14:28.282 
"data_offset": 0, 00:14:28.282 "data_size": 65536 00:14:28.282 }, 00:14:28.282 { 00:14:28.282 "name": "BaseBdev3", 00:14:28.282 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:28.282 "is_configured": true, 00:14:28.282 "data_offset": 0, 00:14:28.282 "data_size": 65536 00:14:28.282 }, 00:14:28.282 { 00:14:28.282 "name": "BaseBdev4", 00:14:28.282 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:28.282 "is_configured": true, 00:14:28.282 "data_offset": 0, 00:14:28.282 "data_size": 65536 00:14:28.282 } 00:14:28.282 ] 00:14:28.282 }' 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.282 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.283 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.283 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.283 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.283 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.283 "name": "raid_bdev1", 00:14:28.283 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:28.283 "strip_size_kb": 0, 00:14:28.283 "state": "online", 00:14:28.283 "raid_level": "raid1", 00:14:28.283 "superblock": false, 00:14:28.283 "num_base_bdevs": 4, 00:14:28.283 "num_base_bdevs_discovered": 3, 00:14:28.283 "num_base_bdevs_operational": 3, 00:14:28.283 "base_bdevs_list": [ 00:14:28.283 { 00:14:28.283 "name": "spare", 00:14:28.283 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:28.283 "is_configured": true, 00:14:28.283 "data_offset": 0, 00:14:28.283 "data_size": 65536 00:14:28.283 }, 00:14:28.283 { 00:14:28.283 "name": null, 00:14:28.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.283 "is_configured": false, 00:14:28.283 "data_offset": 0, 00:14:28.283 "data_size": 65536 00:14:28.283 }, 00:14:28.283 { 00:14:28.283 "name": "BaseBdev3", 00:14:28.283 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:28.283 "is_configured": true, 00:14:28.283 "data_offset": 0, 00:14:28.283 "data_size": 65536 00:14:28.283 }, 00:14:28.283 { 00:14:28.283 "name": "BaseBdev4", 00:14:28.283 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:28.283 "is_configured": true, 00:14:28.283 "data_offset": 0, 00:14:28.283 "data_size": 65536 00:14:28.283 } 00:14:28.283 ] 00:14:28.283 }' 00:14:28.283 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.283 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.283 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.543 
11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.543 "name": "raid_bdev1", 00:14:28.543 "uuid": "760f376a-1987-418b-8a7e-1a06fcc0b2d5", 00:14:28.543 "strip_size_kb": 0, 00:14:28.543 "state": "online", 00:14:28.543 "raid_level": "raid1", 00:14:28.543 "superblock": false, 00:14:28.543 "num_base_bdevs": 4, 00:14:28.543 "num_base_bdevs_discovered": 
3, 00:14:28.543 "num_base_bdevs_operational": 3, 00:14:28.543 "base_bdevs_list": [ 00:14:28.543 { 00:14:28.543 "name": "spare", 00:14:28.543 "uuid": "2ee91fa0-ce59-5ce1-8ace-58fefaf2f309", 00:14:28.543 "is_configured": true, 00:14:28.543 "data_offset": 0, 00:14:28.543 "data_size": 65536 00:14:28.543 }, 00:14:28.543 { 00:14:28.543 "name": null, 00:14:28.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.543 "is_configured": false, 00:14:28.543 "data_offset": 0, 00:14:28.543 "data_size": 65536 00:14:28.543 }, 00:14:28.543 { 00:14:28.543 "name": "BaseBdev3", 00:14:28.543 "uuid": "123ceb20-4e72-514c-8b30-a211ccb0a60c", 00:14:28.543 "is_configured": true, 00:14:28.543 "data_offset": 0, 00:14:28.543 "data_size": 65536 00:14:28.543 }, 00:14:28.543 { 00:14:28.543 "name": "BaseBdev4", 00:14:28.543 "uuid": "76bc5d06-81b4-50ed-bc7e-3403b7676254", 00:14:28.543 "is_configured": true, 00:14:28.543 "data_offset": 0, 00:14:28.543 "data_size": 65536 00:14:28.543 } 00:14:28.543 ] 00:14:28.543 }' 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.543 11:46:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.801 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.801 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.801 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.802 [2024-11-04 11:46:54.267073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.802 [2024-11-04 11:46:54.267210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.802 [2024-11-04 11:46:54.267332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.802 [2024-11-04 11:46:54.267460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:14:28.802 [2024-11-04 11:46:54.267474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.802 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:29.060 /dev/nbd0 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:29.319 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.319 1+0 records in 00:14:29.319 1+0 records out 00:14:29.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473687 s, 8.6 MB/s 00:14:29.320 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.320 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:29.320 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.320 11:46:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:29.320 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:29.320 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.320 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.320 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:29.320 /dev/nbd1 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.578 1+0 records in 00:14:29.578 1+0 records out 00:14:29.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600432 s, 6.8 MB/s 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:29.578 11:46:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:29.578 11:46:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:29.578 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.578 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:29.578 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.578 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:29.578 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.578 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:29.837 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.837 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.837 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.837 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
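The `waitfornbd` probing visible in the xtrace above (repeated `grep -q -w nbdX /proc/partitions` checks, then a single 4 KiB direct `dd` read to confirm the device actually serves I/O) boils down to a retry-until-ready loop. A minimal sketch of that pattern; the retry count, delay, and predicate shown are illustrative assumptions, not the exact values from `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Hedged sketch of the retry pattern behind waitfornbd in the log above.
# Retry count and delay are illustrative assumptions, not SPDK's exact values.
retry_until() {
    local i
    for ((i = 1; i <= 20; i++)); do
        "$@" && return 0   # predicate succeeded: stop polling
        sleep 0.1
    done
    return 1               # gave up after 20 attempts
}

# In the log the predicate is "grep -q -w nbd0 /proc/partitions", followed by
# a one-block direct read to verify the device answers I/O:
#   retry_until grep -q -w nbd0 /proc/partitions &&
#       dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
```

Separating the generic retry loop from the device-specific predicate keeps the same structure the autotest helpers use, where `waitfornbd` and `waitfornbd_exit` differ only in what they poll for.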
00:14:29.838 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.838 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.838 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:29.838 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.838 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.838 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77791 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77791 ']' 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77791 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77791 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77791' 00:14:30.096 killing process with pid 77791 00:14:30.096 Received shutdown signal, test time was about 60.000000 seconds 00:14:30.096 00:14:30.096 Latency(us) 00:14:30.096 [2024-11-04T11:46:55.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.096 [2024-11-04T11:46:55.618Z] =================================================================================================================== 00:14:30.096 [2024-11-04T11:46:55.618Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77791 00:14:30.096 [2024-11-04 11:46:55.562416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.096 11:46:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77791 00:14:30.664 [2024-11-04 11:46:56.066316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.043 11:46:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:32.043 00:14:32.043 real 0m17.882s 00:14:32.043 user 0m19.977s 00:14:32.043 sys 0m3.220s 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:32.044 ************************************ 00:14:32.044 END TEST raid_rebuild_test 00:14:32.044 ************************************ 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.044 
11:46:57 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:32.044 11:46:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:32.044 11:46:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:32.044 11:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.044 ************************************ 00:14:32.044 START TEST raid_rebuild_test_sb 00:14:32.044 ************************************ 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.044 
11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78238 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78238 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78238 ']' 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:32.044 11:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.044 [2024-11-04 11:46:57.421465] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:14:32.044 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:32.044 Zero copy mechanism will not be used. 
00:14:32.044 [2024-11-04 11:46:57.421650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78238 ] 00:14:32.303 [2024-11-04 11:46:57.595360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.303 [2024-11-04 11:46:57.711261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.562 [2024-11-04 11:46:57.916459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.562 [2024-11-04 11:46:57.916529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.821 BaseBdev1_malloc 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.821 [2024-11-04 11:46:58.296534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:32.821 [2024-11-04 11:46:58.296648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.821 [2024-11-04 11:46:58.296714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:32.821 [2024-11-04 11:46:58.296757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.821 [2024-11-04 11:46:58.298774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.821 [2024-11-04 11:46:58.298846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.821 BaseBdev1 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.821 BaseBdev2_malloc 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.821 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:33.080 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.080 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.080 [2024-11-04 11:46:58.348108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:33.080 [2024-11-04 11:46:58.348235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.080 [2024-11-04 11:46:58.348263] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:33.081 [2024-11-04 11:46:58.348278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.081 [2024-11-04 11:46:58.350600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.081 [2024-11-04 11:46:58.350638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:33.081 BaseBdev2 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 BaseBdev3_malloc 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 [2024-11-04 11:46:58.421363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:33.081 [2024-11-04 11:46:58.421501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.081 [2024-11-04 11:46:58.421556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:33.081 [2024-11-04 11:46:58.421593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:33.081 [2024-11-04 11:46:58.423970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.081 [2024-11-04 11:46:58.424053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:33.081 BaseBdev3 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 BaseBdev4_malloc 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 [2024-11-04 11:46:58.475894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:33.081 [2024-11-04 11:46:58.475954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.081 [2024-11-04 11:46:58.475972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:33.081 [2024-11-04 11:46:58.475983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.081 [2024-11-04 11:46:58.477991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.081 [2024-11-04 11:46:58.478033] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:33.081 BaseBdev4 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 spare_malloc 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 spare_delay 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 [2024-11-04 11:46:58.547101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.081 [2024-11-04 11:46:58.547235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.081 [2024-11-04 11:46:58.547302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:33.081 [2024-11-04 11:46:58.547364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:33.081 [2024-11-04 11:46:58.549899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.081 [2024-11-04 11:46:58.549986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.081 spare 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 [2024-11-04 11:46:58.559132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.081 [2024-11-04 11:46:58.561149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.081 [2024-11-04 11:46:58.561270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.081 [2024-11-04 11:46:58.561409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.081 [2024-11-04 11:46:58.561656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:33.081 [2024-11-04 11:46:58.561712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.081 [2024-11-04 11:46:58.562050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:33.081 [2024-11-04 11:46:58.562302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:33.081 [2024-11-04 11:46:58.562351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:33.081 [2024-11-04 11:46:58.562650] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.081 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.340 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.340 "name": "raid_bdev1", 00:14:33.340 "uuid": 
"6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:33.340 "strip_size_kb": 0, 00:14:33.340 "state": "online", 00:14:33.340 "raid_level": "raid1", 00:14:33.340 "superblock": true, 00:14:33.340 "num_base_bdevs": 4, 00:14:33.340 "num_base_bdevs_discovered": 4, 00:14:33.340 "num_base_bdevs_operational": 4, 00:14:33.340 "base_bdevs_list": [ 00:14:33.340 { 00:14:33.340 "name": "BaseBdev1", 00:14:33.340 "uuid": "ac677cfc-09c8-59f8-a1c0-9bf1970774fa", 00:14:33.340 "is_configured": true, 00:14:33.340 "data_offset": 2048, 00:14:33.340 "data_size": 63488 00:14:33.340 }, 00:14:33.340 { 00:14:33.340 "name": "BaseBdev2", 00:14:33.340 "uuid": "3668ca71-9f39-5aa7-bdc9-b8016e432b33", 00:14:33.340 "is_configured": true, 00:14:33.340 "data_offset": 2048, 00:14:33.340 "data_size": 63488 00:14:33.340 }, 00:14:33.340 { 00:14:33.340 "name": "BaseBdev3", 00:14:33.340 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:33.340 "is_configured": true, 00:14:33.340 "data_offset": 2048, 00:14:33.340 "data_size": 63488 00:14:33.340 }, 00:14:33.340 { 00:14:33.340 "name": "BaseBdev4", 00:14:33.340 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:33.340 "is_configured": true, 00:14:33.340 "data_offset": 2048, 00:14:33.340 "data_size": 63488 00:14:33.340 } 00:14:33.340 ] 00:14:33.340 }' 00:14:33.340 11:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.340 11:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.599 [2024-11-04 11:46:59.050693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.599 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:33.858 11:46:59 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:33.858 [2024-11-04 11:46:59.337837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:33.858 /dev/nbd0 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:33.858 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:34.117 1+0 records in 00:14:34.117 1+0 records out 00:14:34.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537217 s, 7.6 MB/s 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:34.117 11:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:39.387 63488+0 records in 00:14:39.387 63488+0 records out 00:14:39.387 32505856 bytes (33 MB, 31 MiB) copied, 5.44566 s, 6.0 MB/s 00:14:39.387 11:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:39.387 11:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.387 11:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.387 11:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.387 11:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:39.387 11:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.387 11:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:39.647 [2024-11-04 11:47:05.054656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.647 [2024-11-04 11:47:05.094694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.647 "name": "raid_bdev1", 00:14:39.647 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:39.647 "strip_size_kb": 0, 00:14:39.647 "state": "online", 00:14:39.647 "raid_level": "raid1", 00:14:39.647 "superblock": true, 00:14:39.647 "num_base_bdevs": 4, 00:14:39.647 "num_base_bdevs_discovered": 3, 00:14:39.647 "num_base_bdevs_operational": 3, 00:14:39.647 "base_bdevs_list": [ 00:14:39.647 { 00:14:39.647 "name": null, 00:14:39.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.647 "is_configured": false, 00:14:39.647 "data_offset": 0, 00:14:39.647 "data_size": 63488 00:14:39.647 }, 00:14:39.647 { 00:14:39.647 "name": "BaseBdev2", 00:14:39.647 "uuid": "3668ca71-9f39-5aa7-bdc9-b8016e432b33", 00:14:39.647 "is_configured": true, 00:14:39.647 
"data_offset": 2048, 00:14:39.647 "data_size": 63488 00:14:39.647 }, 00:14:39.647 { 00:14:39.647 "name": "BaseBdev3", 00:14:39.647 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:39.647 "is_configured": true, 00:14:39.647 "data_offset": 2048, 00:14:39.647 "data_size": 63488 00:14:39.647 }, 00:14:39.647 { 00:14:39.647 "name": "BaseBdev4", 00:14:39.647 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:39.647 "is_configured": true, 00:14:39.647 "data_offset": 2048, 00:14:39.647 "data_size": 63488 00:14:39.647 } 00:14:39.647 ] 00:14:39.647 }' 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.647 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.215 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.215 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.215 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.215 [2024-11-04 11:47:05.537949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.215 [2024-11-04 11:47:05.553976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:40.215 11:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.215 [2024-11-04 11:47:05.556265] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.215 11:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.151 "name": "raid_bdev1", 00:14:41.151 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:41.151 "strip_size_kb": 0, 00:14:41.151 "state": "online", 00:14:41.151 "raid_level": "raid1", 00:14:41.151 "superblock": true, 00:14:41.151 "num_base_bdevs": 4, 00:14:41.151 "num_base_bdevs_discovered": 4, 00:14:41.151 "num_base_bdevs_operational": 4, 00:14:41.151 "process": { 00:14:41.151 "type": "rebuild", 00:14:41.151 "target": "spare", 00:14:41.151 "progress": { 00:14:41.151 "blocks": 20480, 00:14:41.151 "percent": 32 00:14:41.151 } 00:14:41.151 }, 00:14:41.151 "base_bdevs_list": [ 00:14:41.151 { 00:14:41.151 "name": "spare", 00:14:41.151 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:41.151 "is_configured": true, 00:14:41.151 "data_offset": 2048, 00:14:41.151 "data_size": 63488 00:14:41.151 }, 00:14:41.151 { 00:14:41.151 "name": "BaseBdev2", 00:14:41.151 "uuid": "3668ca71-9f39-5aa7-bdc9-b8016e432b33", 00:14:41.151 "is_configured": true, 00:14:41.151 "data_offset": 2048, 00:14:41.151 "data_size": 63488 00:14:41.151 }, 00:14:41.151 { 00:14:41.151 "name": "BaseBdev3", 00:14:41.151 "uuid": 
"316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:41.151 "is_configured": true, 00:14:41.151 "data_offset": 2048, 00:14:41.151 "data_size": 63488 00:14:41.151 }, 00:14:41.151 { 00:14:41.151 "name": "BaseBdev4", 00:14:41.151 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:41.151 "is_configured": true, 00:14:41.151 "data_offset": 2048, 00:14:41.151 "data_size": 63488 00:14:41.151 } 00:14:41.151 ] 00:14:41.151 }' 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.151 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.410 [2024-11-04 11:47:06.723253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.410 [2024-11-04 11:47:06.762193] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.410 [2024-11-04 11:47:06.762385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.410 [2024-11-04 11:47:06.762501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.410 [2024-11-04 11:47:06.762558] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.410 "name": "raid_bdev1", 00:14:41.410 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:41.410 "strip_size_kb": 0, 00:14:41.410 "state": "online", 00:14:41.410 "raid_level": "raid1", 00:14:41.410 "superblock": true, 00:14:41.410 "num_base_bdevs": 4, 00:14:41.410 
"num_base_bdevs_discovered": 3, 00:14:41.410 "num_base_bdevs_operational": 3, 00:14:41.410 "base_bdevs_list": [ 00:14:41.410 { 00:14:41.410 "name": null, 00:14:41.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.410 "is_configured": false, 00:14:41.410 "data_offset": 0, 00:14:41.410 "data_size": 63488 00:14:41.410 }, 00:14:41.410 { 00:14:41.410 "name": "BaseBdev2", 00:14:41.410 "uuid": "3668ca71-9f39-5aa7-bdc9-b8016e432b33", 00:14:41.410 "is_configured": true, 00:14:41.410 "data_offset": 2048, 00:14:41.410 "data_size": 63488 00:14:41.410 }, 00:14:41.410 { 00:14:41.410 "name": "BaseBdev3", 00:14:41.410 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:41.410 "is_configured": true, 00:14:41.410 "data_offset": 2048, 00:14:41.410 "data_size": 63488 00:14:41.410 }, 00:14:41.410 { 00:14:41.410 "name": "BaseBdev4", 00:14:41.410 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:41.410 "is_configured": true, 00:14:41.410 "data_offset": 2048, 00:14:41.410 "data_size": 63488 00:14:41.410 } 00:14:41.410 ] 00:14:41.410 }' 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.410 11:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.978 "name": "raid_bdev1", 00:14:41.978 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:41.978 "strip_size_kb": 0, 00:14:41.978 "state": "online", 00:14:41.978 "raid_level": "raid1", 00:14:41.978 "superblock": true, 00:14:41.978 "num_base_bdevs": 4, 00:14:41.978 "num_base_bdevs_discovered": 3, 00:14:41.978 "num_base_bdevs_operational": 3, 00:14:41.978 "base_bdevs_list": [ 00:14:41.978 { 00:14:41.978 "name": null, 00:14:41.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.978 "is_configured": false, 00:14:41.978 "data_offset": 0, 00:14:41.978 "data_size": 63488 00:14:41.978 }, 00:14:41.978 { 00:14:41.978 "name": "BaseBdev2", 00:14:41.978 "uuid": "3668ca71-9f39-5aa7-bdc9-b8016e432b33", 00:14:41.978 "is_configured": true, 00:14:41.978 "data_offset": 2048, 00:14:41.978 "data_size": 63488 00:14:41.978 }, 00:14:41.978 { 00:14:41.978 "name": "BaseBdev3", 00:14:41.978 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:41.978 "is_configured": true, 00:14:41.978 "data_offset": 2048, 00:14:41.978 "data_size": 63488 00:14:41.978 }, 00:14:41.978 { 00:14:41.978 "name": "BaseBdev4", 00:14:41.978 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:41.978 "is_configured": true, 00:14:41.978 "data_offset": 2048, 00:14:41.978 "data_size": 63488 00:14:41.978 } 00:14:41.978 ] 00:14:41.978 }' 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.978 [2024-11-04 11:47:07.397477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.978 [2024-11-04 11:47:07.413201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.978 11:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:41.978 [2024-11-04 11:47:07.415356] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.915 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.175 "name": "raid_bdev1", 00:14:43.175 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:43.175 "strip_size_kb": 0, 00:14:43.175 "state": "online", 00:14:43.175 "raid_level": "raid1", 00:14:43.175 "superblock": true, 00:14:43.175 "num_base_bdevs": 4, 00:14:43.175 "num_base_bdevs_discovered": 4, 00:14:43.175 "num_base_bdevs_operational": 4, 00:14:43.175 "process": { 00:14:43.175 "type": "rebuild", 00:14:43.175 "target": "spare", 00:14:43.175 "progress": { 00:14:43.175 "blocks": 20480, 00:14:43.175 "percent": 32 00:14:43.175 } 00:14:43.175 }, 00:14:43.175 "base_bdevs_list": [ 00:14:43.175 { 00:14:43.175 "name": "spare", 00:14:43.175 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:43.175 "is_configured": true, 00:14:43.175 "data_offset": 2048, 00:14:43.175 "data_size": 63488 00:14:43.175 }, 00:14:43.175 { 00:14:43.175 "name": "BaseBdev2", 00:14:43.175 "uuid": "3668ca71-9f39-5aa7-bdc9-b8016e432b33", 00:14:43.175 "is_configured": true, 00:14:43.175 "data_offset": 2048, 00:14:43.175 "data_size": 63488 00:14:43.175 }, 00:14:43.175 { 00:14:43.175 "name": "BaseBdev3", 00:14:43.175 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:43.175 "is_configured": true, 00:14:43.175 "data_offset": 2048, 00:14:43.175 "data_size": 63488 00:14:43.175 }, 00:14:43.175 { 00:14:43.175 "name": "BaseBdev4", 00:14:43.175 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:43.175 "is_configured": true, 00:14:43.175 "data_offset": 2048, 00:14:43.175 "data_size": 63488 00:14:43.175 } 00:14:43.175 ] 00:14:43.175 }' 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:43.175 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.175 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.175 [2024-11-04 11:47:08.562845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.434 [2024-11-04 11:47:08.721124] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.434 "name": "raid_bdev1", 00:14:43.434 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:43.434 "strip_size_kb": 0, 00:14:43.434 "state": "online", 00:14:43.434 "raid_level": "raid1", 00:14:43.434 "superblock": true, 00:14:43.434 "num_base_bdevs": 4, 00:14:43.434 "num_base_bdevs_discovered": 3, 00:14:43.434 "num_base_bdevs_operational": 3, 00:14:43.434 "process": { 00:14:43.434 "type": "rebuild", 00:14:43.434 "target": "spare", 00:14:43.434 "progress": { 00:14:43.434 "blocks": 24576, 00:14:43.434 "percent": 38 00:14:43.434 } 00:14:43.434 }, 00:14:43.434 "base_bdevs_list": [ 00:14:43.434 { 00:14:43.434 "name": "spare", 00:14:43.434 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:43.434 "is_configured": true, 00:14:43.434 "data_offset": 2048, 00:14:43.434 "data_size": 63488 00:14:43.434 }, 00:14:43.434 { 00:14:43.434 "name": null, 00:14:43.434 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:43.434 "is_configured": false, 00:14:43.434 "data_offset": 0, 00:14:43.434 "data_size": 63488 00:14:43.434 }, 00:14:43.434 { 00:14:43.434 "name": "BaseBdev3", 00:14:43.434 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:43.434 "is_configured": true, 00:14:43.434 "data_offset": 2048, 00:14:43.434 "data_size": 63488 00:14:43.434 }, 00:14:43.434 { 00:14:43.434 "name": "BaseBdev4", 00:14:43.434 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:43.434 "is_configured": true, 00:14:43.434 "data_offset": 2048, 00:14:43.434 "data_size": 63488 00:14:43.434 } 00:14:43.434 ] 00:14:43.434 }' 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.434 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=470 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.435 
11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.435 "name": "raid_bdev1", 00:14:43.435 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:43.435 "strip_size_kb": 0, 00:14:43.435 "state": "online", 00:14:43.435 "raid_level": "raid1", 00:14:43.435 "superblock": true, 00:14:43.435 "num_base_bdevs": 4, 00:14:43.435 "num_base_bdevs_discovered": 3, 00:14:43.435 "num_base_bdevs_operational": 3, 00:14:43.435 "process": { 00:14:43.435 "type": "rebuild", 00:14:43.435 "target": "spare", 00:14:43.435 "progress": { 00:14:43.435 "blocks": 26624, 00:14:43.435 "percent": 41 00:14:43.435 } 00:14:43.435 }, 00:14:43.435 "base_bdevs_list": [ 00:14:43.435 { 00:14:43.435 "name": "spare", 00:14:43.435 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:43.435 "is_configured": true, 00:14:43.435 "data_offset": 2048, 00:14:43.435 "data_size": 63488 00:14:43.435 }, 00:14:43.435 { 00:14:43.435 "name": null, 00:14:43.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.435 "is_configured": false, 00:14:43.435 "data_offset": 0, 00:14:43.435 "data_size": 63488 00:14:43.435 }, 00:14:43.435 { 00:14:43.435 "name": "BaseBdev3", 00:14:43.435 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:43.435 "is_configured": true, 00:14:43.435 "data_offset": 2048, 00:14:43.435 "data_size": 63488 00:14:43.435 }, 00:14:43.435 { 00:14:43.435 "name": "BaseBdev4", 00:14:43.435 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:43.435 "is_configured": true, 00:14:43.435 "data_offset": 2048, 00:14:43.435 "data_size": 63488 
00:14:43.435 } 00:14:43.435 ] 00:14:43.435 }' 00:14:43.435 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.693 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.693 11:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.693 11:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.693 11:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.687 "name": "raid_bdev1", 00:14:44.687 "uuid": 
"6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:44.687 "strip_size_kb": 0, 00:14:44.687 "state": "online", 00:14:44.687 "raid_level": "raid1", 00:14:44.687 "superblock": true, 00:14:44.687 "num_base_bdevs": 4, 00:14:44.687 "num_base_bdevs_discovered": 3, 00:14:44.687 "num_base_bdevs_operational": 3, 00:14:44.687 "process": { 00:14:44.687 "type": "rebuild", 00:14:44.687 "target": "spare", 00:14:44.687 "progress": { 00:14:44.687 "blocks": 49152, 00:14:44.687 "percent": 77 00:14:44.687 } 00:14:44.687 }, 00:14:44.687 "base_bdevs_list": [ 00:14:44.687 { 00:14:44.687 "name": "spare", 00:14:44.687 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:44.687 "is_configured": true, 00:14:44.687 "data_offset": 2048, 00:14:44.687 "data_size": 63488 00:14:44.687 }, 00:14:44.687 { 00:14:44.687 "name": null, 00:14:44.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.687 "is_configured": false, 00:14:44.687 "data_offset": 0, 00:14:44.687 "data_size": 63488 00:14:44.687 }, 00:14:44.687 { 00:14:44.687 "name": "BaseBdev3", 00:14:44.687 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:44.687 "is_configured": true, 00:14:44.687 "data_offset": 2048, 00:14:44.687 "data_size": 63488 00:14:44.687 }, 00:14:44.687 { 00:14:44.687 "name": "BaseBdev4", 00:14:44.687 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:44.687 "is_configured": true, 00:14:44.687 "data_offset": 2048, 00:14:44.687 "data_size": 63488 00:14:44.687 } 00:14:44.687 ] 00:14:44.687 }' 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.687 11:47:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.256 [2024-11-04 11:47:10.630227] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:45.256 [2024-11-04 11:47:10.630314] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:45.256 [2024-11-04 11:47:10.630492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.824 "name": "raid_bdev1", 00:14:45.824 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:45.824 "strip_size_kb": 0, 00:14:45.824 "state": "online", 00:14:45.824 "raid_level": "raid1", 00:14:45.824 "superblock": true, 00:14:45.824 "num_base_bdevs": 
4, 00:14:45.824 "num_base_bdevs_discovered": 3, 00:14:45.824 "num_base_bdevs_operational": 3, 00:14:45.824 "base_bdevs_list": [ 00:14:45.824 { 00:14:45.824 "name": "spare", 00:14:45.824 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:45.824 "is_configured": true, 00:14:45.824 "data_offset": 2048, 00:14:45.824 "data_size": 63488 00:14:45.824 }, 00:14:45.824 { 00:14:45.824 "name": null, 00:14:45.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.824 "is_configured": false, 00:14:45.824 "data_offset": 0, 00:14:45.824 "data_size": 63488 00:14:45.824 }, 00:14:45.824 { 00:14:45.824 "name": "BaseBdev3", 00:14:45.824 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:45.824 "is_configured": true, 00:14:45.824 "data_offset": 2048, 00:14:45.824 "data_size": 63488 00:14:45.824 }, 00:14:45.824 { 00:14:45.824 "name": "BaseBdev4", 00:14:45.824 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:45.824 "is_configured": true, 00:14:45.824 "data_offset": 2048, 00:14:45.824 "data_size": 63488 00:14:45.824 } 00:14:45.824 ] 00:14:45.824 }' 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.824 11:47:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.824 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.824 "name": "raid_bdev1", 00:14:45.824 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:45.824 "strip_size_kb": 0, 00:14:45.825 "state": "online", 00:14:45.825 "raid_level": "raid1", 00:14:45.825 "superblock": true, 00:14:45.825 "num_base_bdevs": 4, 00:14:45.825 "num_base_bdevs_discovered": 3, 00:14:45.825 "num_base_bdevs_operational": 3, 00:14:45.825 "base_bdevs_list": [ 00:14:45.825 { 00:14:45.825 "name": "spare", 00:14:45.825 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:45.825 "is_configured": true, 00:14:45.825 "data_offset": 2048, 00:14:45.825 "data_size": 63488 00:14:45.825 }, 00:14:45.825 { 00:14:45.825 "name": null, 00:14:45.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.825 "is_configured": false, 00:14:45.825 "data_offset": 0, 00:14:45.825 "data_size": 63488 00:14:45.825 }, 00:14:45.825 { 00:14:45.825 "name": "BaseBdev3", 00:14:45.825 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:45.825 "is_configured": true, 00:14:45.825 "data_offset": 2048, 00:14:45.825 "data_size": 63488 00:14:45.825 }, 00:14:45.825 { 00:14:45.825 "name": "BaseBdev4", 00:14:45.825 "uuid": 
"2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:45.825 "is_configured": true, 00:14:45.825 "data_offset": 2048, 00:14:45.825 "data_size": 63488 00:14:45.825 } 00:14:45.825 ] 00:14:45.825 }' 00:14:45.825 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.084 11:47:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.084 "name": "raid_bdev1", 00:14:46.084 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:46.084 "strip_size_kb": 0, 00:14:46.084 "state": "online", 00:14:46.084 "raid_level": "raid1", 00:14:46.084 "superblock": true, 00:14:46.084 "num_base_bdevs": 4, 00:14:46.084 "num_base_bdevs_discovered": 3, 00:14:46.084 "num_base_bdevs_operational": 3, 00:14:46.084 "base_bdevs_list": [ 00:14:46.084 { 00:14:46.084 "name": "spare", 00:14:46.084 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:46.084 "is_configured": true, 00:14:46.084 "data_offset": 2048, 00:14:46.084 "data_size": 63488 00:14:46.084 }, 00:14:46.084 { 00:14:46.084 "name": null, 00:14:46.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.084 "is_configured": false, 00:14:46.084 "data_offset": 0, 00:14:46.084 "data_size": 63488 00:14:46.084 }, 00:14:46.084 { 00:14:46.084 "name": "BaseBdev3", 00:14:46.084 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:46.084 "is_configured": true, 00:14:46.084 "data_offset": 2048, 00:14:46.084 "data_size": 63488 00:14:46.084 }, 00:14:46.084 { 00:14:46.084 "name": "BaseBdev4", 00:14:46.084 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:46.084 "is_configured": true, 00:14:46.084 "data_offset": 2048, 00:14:46.084 "data_size": 63488 00:14:46.084 } 00:14:46.084 ] 00:14:46.084 }' 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.084 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.343 [2024-11-04 11:47:11.812253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.343 [2024-11-04 11:47:11.812335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.343 [2024-11-04 11:47:11.812471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.343 [2024-11-04 11:47:11.812634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.343 [2024-11-04 11:47:11.812689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:46.343 
11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:46.343 11:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:46.601 /dev/nbd0 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:46.860 11:47:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.860 1+0 records in 00:14:46.860 1+0 records out 00:14:46.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321434 s, 12.7 MB/s 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:46.860 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:46.860 /dev/nbd1 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- 
# (( i <= 20 )) 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.118 1+0 records in 00:14:47.118 1+0 records out 00:14:47.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335918 s, 12.2 MB/s 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.118 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.377 11:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.636 [2024-11-04 11:47:13.085709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.636 [2024-11-04 11:47:13.085851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.636 [2024-11-04 11:47:13.085902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:47.636 [2024-11-04 11:47:13.085958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.636 [2024-11-04 11:47:13.088514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.636 [2024-11-04 11:47:13.088601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:14:47.636 [2024-11-04 11:47:13.088774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:47.636 [2024-11-04 11:47:13.088877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.636 [2024-11-04 11:47:13.089097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.636 [2024-11-04 11:47:13.089275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.636 spare 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.636 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.895 [2024-11-04 11:47:13.189261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:47.895 [2024-11-04 11:47:13.189392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:47.895 [2024-11-04 11:47:13.189877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:47.895 [2024-11-04 11:47:13.190164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:47.895 [2024-11-04 11:47:13.190221] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:47.895 [2024-11-04 11:47:13.190522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.895 11:47:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.895 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.896 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.896 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.896 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.896 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.896 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.896 "name": "raid_bdev1", 00:14:47.896 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:47.896 "strip_size_kb": 0, 00:14:47.896 "state": "online", 00:14:47.896 "raid_level": "raid1", 00:14:47.896 "superblock": true, 00:14:47.896 "num_base_bdevs": 4, 00:14:47.896 "num_base_bdevs_discovered": 3, 00:14:47.896 "num_base_bdevs_operational": 3, 00:14:47.896 "base_bdevs_list": [ 00:14:47.896 { 
00:14:47.896 "name": "spare", 00:14:47.896 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:47.896 "is_configured": true, 00:14:47.896 "data_offset": 2048, 00:14:47.896 "data_size": 63488 00:14:47.896 }, 00:14:47.896 { 00:14:47.896 "name": null, 00:14:47.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.896 "is_configured": false, 00:14:47.896 "data_offset": 2048, 00:14:47.896 "data_size": 63488 00:14:47.896 }, 00:14:47.896 { 00:14:47.896 "name": "BaseBdev3", 00:14:47.896 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:47.896 "is_configured": true, 00:14:47.896 "data_offset": 2048, 00:14:47.896 "data_size": 63488 00:14:47.896 }, 00:14:47.896 { 00:14:47.896 "name": "BaseBdev4", 00:14:47.896 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:47.896 "is_configured": true, 00:14:47.896 "data_offset": 2048, 00:14:47.896 "data_size": 63488 00:14:47.896 } 00:14:47.896 ] 00:14:47.896 }' 00:14:47.896 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.896 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.154 
11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.154 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.154 "name": "raid_bdev1", 00:14:48.154 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:48.154 "strip_size_kb": 0, 00:14:48.154 "state": "online", 00:14:48.154 "raid_level": "raid1", 00:14:48.154 "superblock": true, 00:14:48.154 "num_base_bdevs": 4, 00:14:48.154 "num_base_bdevs_discovered": 3, 00:14:48.154 "num_base_bdevs_operational": 3, 00:14:48.154 "base_bdevs_list": [ 00:14:48.154 { 00:14:48.154 "name": "spare", 00:14:48.154 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:48.154 "is_configured": true, 00:14:48.154 "data_offset": 2048, 00:14:48.154 "data_size": 63488 00:14:48.154 }, 00:14:48.154 { 00:14:48.154 "name": null, 00:14:48.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.154 "is_configured": false, 00:14:48.154 "data_offset": 2048, 00:14:48.154 "data_size": 63488 00:14:48.154 }, 00:14:48.154 { 00:14:48.155 "name": "BaseBdev3", 00:14:48.155 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:48.155 "is_configured": true, 00:14:48.155 "data_offset": 2048, 00:14:48.155 "data_size": 63488 00:14:48.155 }, 00:14:48.155 { 00:14:48.155 "name": "BaseBdev4", 00:14:48.155 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:48.155 "is_configured": true, 00:14:48.155 "data_offset": 2048, 00:14:48.155 "data_size": 63488 00:14:48.155 } 00:14:48.155 ] 00:14:48.155 }' 00:14:48.155 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.418 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.418 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.418 11:47:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.418 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.418 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:48.418 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.418 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.419 [2024-11-04 11:47:13.817402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.419 11:47:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.419 "name": "raid_bdev1", 00:14:48.419 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:48.419 "strip_size_kb": 0, 00:14:48.419 "state": "online", 00:14:48.419 "raid_level": "raid1", 00:14:48.419 "superblock": true, 00:14:48.419 "num_base_bdevs": 4, 00:14:48.419 "num_base_bdevs_discovered": 2, 00:14:48.419 "num_base_bdevs_operational": 2, 00:14:48.419 "base_bdevs_list": [ 00:14:48.419 { 00:14:48.419 "name": null, 00:14:48.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.419 "is_configured": false, 00:14:48.419 "data_offset": 0, 00:14:48.419 "data_size": 63488 00:14:48.419 }, 00:14:48.419 { 00:14:48.419 "name": null, 00:14:48.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.419 "is_configured": false, 00:14:48.419 "data_offset": 2048, 00:14:48.419 "data_size": 63488 00:14:48.419 }, 00:14:48.419 { 00:14:48.419 "name": "BaseBdev3", 00:14:48.419 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:48.419 
"is_configured": true, 00:14:48.419 "data_offset": 2048, 00:14:48.419 "data_size": 63488 00:14:48.419 }, 00:14:48.419 { 00:14:48.419 "name": "BaseBdev4", 00:14:48.419 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:48.419 "is_configured": true, 00:14:48.419 "data_offset": 2048, 00:14:48.419 "data_size": 63488 00:14:48.419 } 00:14:48.419 ] 00:14:48.419 }' 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.419 11:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.994 11:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:48.994 11:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.995 11:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.995 [2024-11-04 11:47:14.252662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.995 [2024-11-04 11:47:14.252947] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:48.995 [2024-11-04 11:47:14.252969] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:48.995 [2024-11-04 11:47:14.253016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.995 [2024-11-04 11:47:14.268058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:48.995 11:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.995 11:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:48.995 [2024-11-04 11:47:14.270120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.932 "name": "raid_bdev1", 00:14:49.932 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:49.932 "strip_size_kb": 0, 00:14:49.932 "state": "online", 00:14:49.932 "raid_level": "raid1", 
00:14:49.932 "superblock": true, 00:14:49.932 "num_base_bdevs": 4, 00:14:49.932 "num_base_bdevs_discovered": 3, 00:14:49.932 "num_base_bdevs_operational": 3, 00:14:49.932 "process": { 00:14:49.932 "type": "rebuild", 00:14:49.932 "target": "spare", 00:14:49.932 "progress": { 00:14:49.932 "blocks": 20480, 00:14:49.932 "percent": 32 00:14:49.932 } 00:14:49.932 }, 00:14:49.932 "base_bdevs_list": [ 00:14:49.932 { 00:14:49.932 "name": "spare", 00:14:49.932 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:49.932 "is_configured": true, 00:14:49.932 "data_offset": 2048, 00:14:49.932 "data_size": 63488 00:14:49.932 }, 00:14:49.932 { 00:14:49.932 "name": null, 00:14:49.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.932 "is_configured": false, 00:14:49.932 "data_offset": 2048, 00:14:49.932 "data_size": 63488 00:14:49.932 }, 00:14:49.932 { 00:14:49.932 "name": "BaseBdev3", 00:14:49.932 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:49.932 "is_configured": true, 00:14:49.932 "data_offset": 2048, 00:14:49.932 "data_size": 63488 00:14:49.932 }, 00:14:49.932 { 00:14:49.932 "name": "BaseBdev4", 00:14:49.932 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:49.932 "is_configured": true, 00:14:49.932 "data_offset": 2048, 00:14:49.932 "data_size": 63488 00:14:49.932 } 00:14:49.932 ] 00:14:49.932 }' 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:49.932 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.932 [2024-11-04 11:47:15.425404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.191 [2024-11-04 11:47:15.475536] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.191 [2024-11-04 11:47:15.475653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.191 [2024-11-04 11:47:15.475724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.191 [2024-11-04 11:47:15.475756] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.191 "name": "raid_bdev1", 00:14:50.191 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:50.191 "strip_size_kb": 0, 00:14:50.191 "state": "online", 00:14:50.191 "raid_level": "raid1", 00:14:50.191 "superblock": true, 00:14:50.191 "num_base_bdevs": 4, 00:14:50.191 "num_base_bdevs_discovered": 2, 00:14:50.191 "num_base_bdevs_operational": 2, 00:14:50.191 "base_bdevs_list": [ 00:14:50.191 { 00:14:50.191 "name": null, 00:14:50.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.191 "is_configured": false, 00:14:50.191 "data_offset": 0, 00:14:50.191 "data_size": 63488 00:14:50.191 }, 00:14:50.191 { 00:14:50.191 "name": null, 00:14:50.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.191 "is_configured": false, 00:14:50.191 "data_offset": 2048, 00:14:50.191 "data_size": 63488 00:14:50.191 }, 00:14:50.191 { 00:14:50.191 "name": "BaseBdev3", 00:14:50.191 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:50.191 "is_configured": true, 00:14:50.191 "data_offset": 2048, 00:14:50.191 "data_size": 63488 00:14:50.191 }, 00:14:50.191 { 00:14:50.191 "name": "BaseBdev4", 00:14:50.191 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:50.191 "is_configured": true, 00:14:50.191 "data_offset": 2048, 00:14:50.191 "data_size": 63488 00:14:50.191 } 00:14:50.191 ] 00:14:50.191 }' 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:50.191 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.450 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:50.450 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.450 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.450 [2024-11-04 11:47:15.885430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:50.450 [2024-11-04 11:47:15.885561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.450 [2024-11-04 11:47:15.885615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:50.450 [2024-11-04 11:47:15.885659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.450 [2024-11-04 11:47:15.886174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.450 [2024-11-04 11:47:15.886241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:50.450 [2024-11-04 11:47:15.886407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:50.450 [2024-11-04 11:47:15.886451] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.450 [2024-11-04 11:47:15.886517] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:50.450 [2024-11-04 11:47:15.886589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.450 [2024-11-04 11:47:15.901322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:50.450 spare 00:14:50.450 11:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.450 11:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:50.450 [2024-11-04 11:47:15.903199] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.389 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.389 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.389 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.389 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.389 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.649 "name": "raid_bdev1", 00:14:51.649 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:51.649 "strip_size_kb": 0, 00:14:51.649 "state": "online", 00:14:51.649 
"raid_level": "raid1", 00:14:51.649 "superblock": true, 00:14:51.649 "num_base_bdevs": 4, 00:14:51.649 "num_base_bdevs_discovered": 3, 00:14:51.649 "num_base_bdevs_operational": 3, 00:14:51.649 "process": { 00:14:51.649 "type": "rebuild", 00:14:51.649 "target": "spare", 00:14:51.649 "progress": { 00:14:51.649 "blocks": 20480, 00:14:51.649 "percent": 32 00:14:51.649 } 00:14:51.649 }, 00:14:51.649 "base_bdevs_list": [ 00:14:51.649 { 00:14:51.649 "name": "spare", 00:14:51.649 "uuid": "50cce3d6-01a6-5607-8ac7-1704fc358b82", 00:14:51.649 "is_configured": true, 00:14:51.649 "data_offset": 2048, 00:14:51.649 "data_size": 63488 00:14:51.649 }, 00:14:51.649 { 00:14:51.649 "name": null, 00:14:51.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.649 "is_configured": false, 00:14:51.649 "data_offset": 2048, 00:14:51.649 "data_size": 63488 00:14:51.649 }, 00:14:51.649 { 00:14:51.649 "name": "BaseBdev3", 00:14:51.649 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:51.649 "is_configured": true, 00:14:51.649 "data_offset": 2048, 00:14:51.649 "data_size": 63488 00:14:51.649 }, 00:14:51.649 { 00:14:51.649 "name": "BaseBdev4", 00:14:51.649 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:51.649 "is_configured": true, 00:14:51.649 "data_offset": 2048, 00:14:51.649 "data_size": 63488 00:14:51.649 } 00:14:51.649 ] 00:14:51.649 }' 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.649 11:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.649 [2024-11-04 11:47:17.042519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.649 [2024-11-04 11:47:17.108653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:51.649 [2024-11-04 11:47:17.108854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.649 [2024-11-04 11:47:17.108895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.649 [2024-11-04 11:47:17.108920] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.649 
11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.649 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.908 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.908 "name": "raid_bdev1", 00:14:51.908 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:51.908 "strip_size_kb": 0, 00:14:51.908 "state": "online", 00:14:51.908 "raid_level": "raid1", 00:14:51.908 "superblock": true, 00:14:51.908 "num_base_bdevs": 4, 00:14:51.908 "num_base_bdevs_discovered": 2, 00:14:51.908 "num_base_bdevs_operational": 2, 00:14:51.908 "base_bdevs_list": [ 00:14:51.908 { 00:14:51.908 "name": null, 00:14:51.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.908 "is_configured": false, 00:14:51.908 "data_offset": 0, 00:14:51.908 "data_size": 63488 00:14:51.908 }, 00:14:51.908 { 00:14:51.908 "name": null, 00:14:51.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.908 "is_configured": false, 00:14:51.908 "data_offset": 2048, 00:14:51.908 "data_size": 63488 00:14:51.908 }, 00:14:51.908 { 00:14:51.908 "name": "BaseBdev3", 00:14:51.909 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:51.909 "is_configured": true, 00:14:51.909 "data_offset": 2048, 00:14:51.909 "data_size": 63488 00:14:51.909 }, 00:14:51.909 { 00:14:51.909 "name": "BaseBdev4", 00:14:51.909 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:51.909 "is_configured": true, 00:14:51.909 "data_offset": 2048, 00:14:51.909 "data_size": 63488 00:14:51.909 } 00:14:51.909 ] 00:14:51.909 }' 00:14:51.909 11:47:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.909 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.167 "name": "raid_bdev1", 00:14:52.167 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:52.167 "strip_size_kb": 0, 00:14:52.167 "state": "online", 00:14:52.167 "raid_level": "raid1", 00:14:52.167 "superblock": true, 00:14:52.167 "num_base_bdevs": 4, 00:14:52.167 "num_base_bdevs_discovered": 2, 00:14:52.167 "num_base_bdevs_operational": 2, 00:14:52.167 "base_bdevs_list": [ 00:14:52.167 { 00:14:52.167 "name": null, 00:14:52.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.167 "is_configured": false, 00:14:52.167 "data_offset": 0, 00:14:52.167 "data_size": 63488 00:14:52.167 }, 00:14:52.167 
{ 00:14:52.167 "name": null, 00:14:52.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.167 "is_configured": false, 00:14:52.167 "data_offset": 2048, 00:14:52.167 "data_size": 63488 00:14:52.167 }, 00:14:52.167 { 00:14:52.167 "name": "BaseBdev3", 00:14:52.167 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:52.167 "is_configured": true, 00:14:52.167 "data_offset": 2048, 00:14:52.167 "data_size": 63488 00:14:52.167 }, 00:14:52.167 { 00:14:52.167 "name": "BaseBdev4", 00:14:52.167 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:52.167 "is_configured": true, 00:14:52.167 "data_offset": 2048, 00:14:52.167 "data_size": 63488 00:14:52.167 } 00:14:52.167 ] 00:14:52.167 }' 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.167 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.167 [2024-11-04 11:47:17.682023] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.167 [2024-11-04 11:47:17.682135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.167 [2024-11-04 11:47:17.682180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:52.167 [2024-11-04 11:47:17.682233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.167 [2024-11-04 11:47:17.682772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.167 [2024-11-04 11:47:17.682833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.168 [2024-11-04 11:47:17.682957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:52.168 [2024-11-04 11:47:17.682978] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:52.168 [2024-11-04 11:47:17.682986] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:52.168 [2024-11-04 11:47:17.683011] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:52.168 BaseBdev1 00:14:52.168 11:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.168 11:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.549 11:47:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.549 11:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.550 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.550 "name": "raid_bdev1", 00:14:53.550 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:53.550 "strip_size_kb": 0, 00:14:53.550 "state": "online", 00:14:53.550 "raid_level": "raid1", 00:14:53.550 "superblock": true, 00:14:53.550 "num_base_bdevs": 4, 00:14:53.550 "num_base_bdevs_discovered": 2, 00:14:53.550 "num_base_bdevs_operational": 2, 00:14:53.550 "base_bdevs_list": [ 00:14:53.550 { 00:14:53.550 "name": null, 00:14:53.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.550 "is_configured": false, 00:14:53.550 "data_offset": 0, 00:14:53.550 "data_size": 63488 00:14:53.550 }, 00:14:53.550 { 00:14:53.550 "name": null, 00:14:53.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.550 
"is_configured": false, 00:14:53.550 "data_offset": 2048, 00:14:53.550 "data_size": 63488 00:14:53.550 }, 00:14:53.550 { 00:14:53.550 "name": "BaseBdev3", 00:14:53.550 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:53.550 "is_configured": true, 00:14:53.550 "data_offset": 2048, 00:14:53.550 "data_size": 63488 00:14:53.550 }, 00:14:53.550 { 00:14:53.550 "name": "BaseBdev4", 00:14:53.550 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:53.550 "is_configured": true, 00:14:53.550 "data_offset": 2048, 00:14:53.550 "data_size": 63488 00:14:53.550 } 00:14:53.550 ] 00:14:53.550 }' 00:14:53.550 11:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.550 11:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:53.809 "name": "raid_bdev1", 00:14:53.809 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:53.809 "strip_size_kb": 0, 00:14:53.809 "state": "online", 00:14:53.809 "raid_level": "raid1", 00:14:53.809 "superblock": true, 00:14:53.809 "num_base_bdevs": 4, 00:14:53.809 "num_base_bdevs_discovered": 2, 00:14:53.809 "num_base_bdevs_operational": 2, 00:14:53.809 "base_bdevs_list": [ 00:14:53.809 { 00:14:53.809 "name": null, 00:14:53.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.809 "is_configured": false, 00:14:53.809 "data_offset": 0, 00:14:53.809 "data_size": 63488 00:14:53.809 }, 00:14:53.809 { 00:14:53.809 "name": null, 00:14:53.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.809 "is_configured": false, 00:14:53.809 "data_offset": 2048, 00:14:53.809 "data_size": 63488 00:14:53.809 }, 00:14:53.809 { 00:14:53.809 "name": "BaseBdev3", 00:14:53.809 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:53.809 "is_configured": true, 00:14:53.809 "data_offset": 2048, 00:14:53.809 "data_size": 63488 00:14:53.809 }, 00:14:53.809 { 00:14:53.809 "name": "BaseBdev4", 00:14:53.809 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:53.809 "is_configured": true, 00:14:53.809 "data_offset": 2048, 00:14:53.809 "data_size": 63488 00:14:53.809 } 00:14:53.809 ] 00:14:53.809 }' 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.809 [2024-11-04 11:47:19.299328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.809 [2024-11-04 11:47:19.299614] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:53.809 [2024-11-04 11:47:19.299682] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:53.809 request: 00:14:53.809 { 00:14:53.809 "base_bdev": "BaseBdev1", 00:14:53.809 "raid_bdev": "raid_bdev1", 00:14:53.809 "method": "bdev_raid_add_base_bdev", 00:14:53.809 "req_id": 1 00:14:53.809 } 00:14:53.809 Got JSON-RPC error response 00:14:53.809 response: 00:14:53.809 { 00:14:53.809 "code": -22, 00:14:53.809 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:53.809 } 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:53.809 11:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.186 "name": "raid_bdev1", 00:14:55.186 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:55.186 "strip_size_kb": 0, 00:14:55.186 "state": "online", 00:14:55.186 "raid_level": "raid1", 00:14:55.186 "superblock": true, 00:14:55.186 "num_base_bdevs": 4, 00:14:55.186 "num_base_bdevs_discovered": 2, 00:14:55.186 "num_base_bdevs_operational": 2, 00:14:55.186 "base_bdevs_list": [ 00:14:55.186 { 00:14:55.186 "name": null, 00:14:55.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.186 "is_configured": false, 00:14:55.186 "data_offset": 0, 00:14:55.186 "data_size": 63488 00:14:55.186 }, 00:14:55.186 { 00:14:55.186 "name": null, 00:14:55.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.186 "is_configured": false, 00:14:55.186 "data_offset": 2048, 00:14:55.186 "data_size": 63488 00:14:55.186 }, 00:14:55.186 { 00:14:55.186 "name": "BaseBdev3", 00:14:55.186 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:55.186 "is_configured": true, 00:14:55.186 "data_offset": 2048, 00:14:55.186 "data_size": 63488 00:14:55.186 }, 00:14:55.186 { 00:14:55.186 "name": "BaseBdev4", 00:14:55.186 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:55.186 "is_configured": true, 00:14:55.186 "data_offset": 2048, 00:14:55.186 "data_size": 63488 00:14:55.186 } 00:14:55.186 ] 00:14:55.186 }' 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.186 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.446 11:47:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.446 "name": "raid_bdev1", 00:14:55.446 "uuid": "6c3a51b0-38fa-4708-a259-4a409c4e7e09", 00:14:55.446 "strip_size_kb": 0, 00:14:55.446 "state": "online", 00:14:55.446 "raid_level": "raid1", 00:14:55.446 "superblock": true, 00:14:55.446 "num_base_bdevs": 4, 00:14:55.446 "num_base_bdevs_discovered": 2, 00:14:55.446 "num_base_bdevs_operational": 2, 00:14:55.446 "base_bdevs_list": [ 00:14:55.446 { 00:14:55.446 "name": null, 00:14:55.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.446 "is_configured": false, 00:14:55.446 "data_offset": 0, 00:14:55.446 "data_size": 63488 00:14:55.446 }, 00:14:55.446 { 00:14:55.446 "name": null, 00:14:55.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.446 "is_configured": false, 00:14:55.446 "data_offset": 2048, 00:14:55.446 "data_size": 63488 00:14:55.446 }, 00:14:55.446 { 00:14:55.446 "name": "BaseBdev3", 00:14:55.446 "uuid": "316d4aa0-c08d-5265-8c51-29dbeb667715", 00:14:55.446 "is_configured": true, 00:14:55.446 "data_offset": 2048, 00:14:55.446 "data_size": 63488 00:14:55.446 }, 
00:14:55.446 { 00:14:55.446 "name": "BaseBdev4", 00:14:55.446 "uuid": "2864c8ba-17ff-52f3-a46d-c0d733dfad14", 00:14:55.446 "is_configured": true, 00:14:55.446 "data_offset": 2048, 00:14:55.446 "data_size": 63488 00:14:55.446 } 00:14:55.446 ] 00:14:55.446 }' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78238 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78238 ']' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78238 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78238 00:14:55.446 killing process with pid 78238 00:14:55.446 Received shutdown signal, test time was about 60.000000 seconds 00:14:55.446 00:14:55.446 Latency(us) 00:14:55.446 [2024-11-04T11:47:20.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.446 [2024-11-04T11:47:20.968Z] =================================================================================================================== 00:14:55.446 [2024-11-04T11:47:20.968Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78238' 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78238 00:14:55.446 [2024-11-04 11:47:20.944873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.446 [2024-11-04 11:47:20.945004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.446 11:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78238 00:14:55.446 [2024-11-04 11:47:20.945078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.446 [2024-11-04 11:47:20.945089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:56.014 [2024-11-04 11:47:21.464424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.430 ************************************ 00:14:57.430 END TEST raid_rebuild_test_sb 00:14:57.430 ************************************ 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:57.430 00:14:57.430 real 0m25.278s 00:14:57.430 user 0m30.591s 00:14:57.430 sys 0m3.767s 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.430 11:47:22 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:57.430 11:47:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:57.430 11:47:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:57.430 11:47:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:57.430 ************************************ 00:14:57.430 START TEST raid_rebuild_test_io 00:14:57.430 ************************************ 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:57.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78992 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78992 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 78992 ']' 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:57.430 11:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.430 [2024-11-04 11:47:22.765761] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:14:57.430 [2024-11-04 11:47:22.765960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:57.430 Zero copy mechanism will not be used. 
00:14:57.430 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78992 ] 00:14:57.430 [2024-11-04 11:47:22.938507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.688 [2024-11-04 11:47:23.054246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.947 [2024-11-04 11:47:23.264909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.947 [2024-11-04 11:47:23.265054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.205 BaseBdev1_malloc 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.205 [2024-11-04 11:47:23.703858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:58.205 [2024-11-04 11:47:23.704010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:58.205 [2024-11-04 11:47:23.704058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:58.205 [2024-11-04 11:47:23.704100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.205 [2024-11-04 11:47:23.706455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.205 [2024-11-04 11:47:23.706533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.205 BaseBdev1 00:14:58.205 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.206 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.206 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:58.206 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.206 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 BaseBdev2_malloc 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 [2024-11-04 11:47:23.761520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:58.464 [2024-11-04 11:47:23.761652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.464 [2024-11-04 11:47:23.761691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:58.464 [2024-11-04 11:47:23.761705] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.464 [2024-11-04 11:47:23.763806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.464 [2024-11-04 11:47:23.763844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:58.464 BaseBdev2 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 BaseBdev3_malloc 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 [2024-11-04 11:47:23.833021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:58.464 [2024-11-04 11:47:23.833126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.464 [2024-11-04 11:47:23.833168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:58.464 [2024-11-04 11:47:23.833198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.464 [2024-11-04 11:47:23.835492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:58.464 [2024-11-04 11:47:23.835583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:58.464 BaseBdev3 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 BaseBdev4_malloc 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.464 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.464 [2024-11-04 11:47:23.888119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:58.464 [2024-11-04 11:47:23.888236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.464 [2024-11-04 11:47:23.888272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:58.464 [2024-11-04 11:47:23.888303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.464 [2024-11-04 11:47:23.890487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.464 [2024-11-04 11:47:23.890561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:58.464 BaseBdev4 00:14:58.464 11:47:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.465 spare_malloc 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.465 spare_delay 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.465 [2024-11-04 11:47:23.958073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:58.465 [2024-11-04 11:47:23.958184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.465 [2024-11-04 11:47:23.958225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:58.465 [2024-11-04 11:47:23.958255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.465 [2024-11-04 11:47:23.960580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:58.465 [2024-11-04 11:47:23.960673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:58.465 spare 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.465 [2024-11-04 11:47:23.970091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.465 [2024-11-04 11:47:23.971947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.465 [2024-11-04 11:47:23.972073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.465 [2024-11-04 11:47:23.972164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:58.465 [2024-11-04 11:47:23.972291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:58.465 [2024-11-04 11:47:23.972337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:58.465 [2024-11-04 11:47:23.972652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:58.465 [2024-11-04 11:47:23.972884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:58.465 [2024-11-04 11:47:23.972932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:58.465 [2024-11-04 11:47:23.973111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.465 11:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.724 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.724 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.724 "name": "raid_bdev1", 00:14:58.724 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:14:58.724 "strip_size_kb": 0, 00:14:58.724 "state": "online", 00:14:58.725 "raid_level": "raid1", 00:14:58.725 "superblock": 
false, 00:14:58.725 "num_base_bdevs": 4, 00:14:58.725 "num_base_bdevs_discovered": 4, 00:14:58.725 "num_base_bdevs_operational": 4, 00:14:58.725 "base_bdevs_list": [ 00:14:58.725 { 00:14:58.725 "name": "BaseBdev1", 00:14:58.725 "uuid": "911d9d0d-a249-595f-a176-f1b474722f93", 00:14:58.725 "is_configured": true, 00:14:58.725 "data_offset": 0, 00:14:58.725 "data_size": 65536 00:14:58.725 }, 00:14:58.725 { 00:14:58.725 "name": "BaseBdev2", 00:14:58.725 "uuid": "24197a9a-f94c-5f42-893a-0a6edb227e02", 00:14:58.725 "is_configured": true, 00:14:58.725 "data_offset": 0, 00:14:58.725 "data_size": 65536 00:14:58.725 }, 00:14:58.725 { 00:14:58.725 "name": "BaseBdev3", 00:14:58.725 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:14:58.725 "is_configured": true, 00:14:58.725 "data_offset": 0, 00:14:58.725 "data_size": 65536 00:14:58.725 }, 00:14:58.725 { 00:14:58.725 "name": "BaseBdev4", 00:14:58.725 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:14:58.725 "is_configured": true, 00:14:58.725 "data_offset": 0, 00:14:58.725 "data_size": 65536 00:14:58.725 } 00:14:58.725 ] 00:14:58.725 }' 00:14:58.725 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.725 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.984 [2024-11-04 11:47:24.413753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.984 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.244 [2024-11-04 11:47:24.509234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.244 11:47:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.244 "name": "raid_bdev1", 00:14:59.244 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:14:59.244 "strip_size_kb": 0, 00:14:59.244 "state": "online", 00:14:59.244 "raid_level": "raid1", 00:14:59.244 "superblock": false, 00:14:59.244 "num_base_bdevs": 4, 00:14:59.244 "num_base_bdevs_discovered": 3, 00:14:59.244 "num_base_bdevs_operational": 3, 00:14:59.244 "base_bdevs_list": [ 00:14:59.244 { 00:14:59.244 "name": null, 00:14:59.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.244 "is_configured": false, 00:14:59.244 "data_offset": 0, 00:14:59.244 "data_size": 65536 00:14:59.244 }, 00:14:59.244 { 00:14:59.244 "name": "BaseBdev2", 00:14:59.244 "uuid": "24197a9a-f94c-5f42-893a-0a6edb227e02", 00:14:59.244 
"is_configured": true, 00:14:59.244 "data_offset": 0, 00:14:59.244 "data_size": 65536 00:14:59.244 }, 00:14:59.244 { 00:14:59.244 "name": "BaseBdev3", 00:14:59.244 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:14:59.244 "is_configured": true, 00:14:59.244 "data_offset": 0, 00:14:59.244 "data_size": 65536 00:14:59.244 }, 00:14:59.244 { 00:14:59.244 "name": "BaseBdev4", 00:14:59.244 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:14:59.244 "is_configured": true, 00:14:59.244 "data_offset": 0, 00:14:59.244 "data_size": 65536 00:14:59.244 } 00:14:59.244 ] 00:14:59.244 }' 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.244 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.244 [2024-11-04 11:47:24.609540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:59.244 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:59.244 Zero copy mechanism will not be used. 00:14:59.244 Running I/O for 60 seconds... 
00:14:59.504 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.504 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.504 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.504 [2024-11-04 11:47:24.923648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.504 11:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.504 11:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.504 [2024-11-04 11:47:24.981689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:59.504 [2024-11-04 11:47:24.983724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:59.761 [2024-11-04 11:47:25.104333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:59.761 [2024-11-04 11:47:25.105021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:00.020 [2024-11-04 11:47:25.313339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.020 [2024-11-04 11:47:25.313785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:00.278 137.00 IOPS, 411.00 MiB/s [2024-11-04T11:47:25.800Z] [2024-11-04 11:47:25.654354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:00.278 [2024-11-04 11:47:25.777853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.278 [2024-11-04 11:47:25.778781] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.538 11:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.538 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.538 "name": "raid_bdev1", 00:15:00.538 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:00.538 "strip_size_kb": 0, 00:15:00.538 "state": "online", 00:15:00.538 "raid_level": "raid1", 00:15:00.538 "superblock": false, 00:15:00.538 "num_base_bdevs": 4, 00:15:00.538 "num_base_bdevs_discovered": 4, 00:15:00.538 "num_base_bdevs_operational": 4, 00:15:00.538 "process": { 00:15:00.538 "type": "rebuild", 00:15:00.538 "target": "spare", 00:15:00.538 "progress": { 00:15:00.538 "blocks": 10240, 00:15:00.538 "percent": 15 00:15:00.538 } 00:15:00.538 }, 00:15:00.538 "base_bdevs_list": [ 00:15:00.538 { 00:15:00.538 "name": "spare", 00:15:00.538 "uuid": 
"ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:00.538 "is_configured": true, 00:15:00.538 "data_offset": 0, 00:15:00.538 "data_size": 65536 00:15:00.538 }, 00:15:00.538 { 00:15:00.538 "name": "BaseBdev2", 00:15:00.538 "uuid": "24197a9a-f94c-5f42-893a-0a6edb227e02", 00:15:00.538 "is_configured": true, 00:15:00.538 "data_offset": 0, 00:15:00.538 "data_size": 65536 00:15:00.538 }, 00:15:00.538 { 00:15:00.538 "name": "BaseBdev3", 00:15:00.538 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:00.538 "is_configured": true, 00:15:00.538 "data_offset": 0, 00:15:00.538 "data_size": 65536 00:15:00.538 }, 00:15:00.538 { 00:15:00.538 "name": "BaseBdev4", 00:15:00.538 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:00.538 "is_configured": true, 00:15:00.538 "data_offset": 0, 00:15:00.538 "data_size": 65536 00:15:00.538 } 00:15:00.538 ] 00:15:00.538 }' 00:15:00.538 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.797 [2024-11-04 11:47:26.125134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.797 [2024-11-04 11:47:26.126337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:00.797 [2024-11-04 11:47:26.242122] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.797 [2024-11-04 11:47:26.262236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.797 [2024-11-04 11:47:26.262430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.797 [2024-11-04 11:47:26.262468] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.797 [2024-11-04 11:47:26.295782] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.797 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.798 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.057 "name": "raid_bdev1", 00:15:01.057 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:01.057 "strip_size_kb": 0, 00:15:01.057 "state": "online", 00:15:01.057 "raid_level": "raid1", 00:15:01.057 "superblock": false, 00:15:01.057 "num_base_bdevs": 4, 00:15:01.057 "num_base_bdevs_discovered": 3, 00:15:01.057 "num_base_bdevs_operational": 3, 00:15:01.057 "base_bdevs_list": [ 00:15:01.057 { 00:15:01.057 "name": null, 00:15:01.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.057 "is_configured": false, 00:15:01.057 "data_offset": 0, 00:15:01.057 "data_size": 65536 00:15:01.057 }, 00:15:01.057 { 00:15:01.057 "name": "BaseBdev2", 00:15:01.057 "uuid": "24197a9a-f94c-5f42-893a-0a6edb227e02", 00:15:01.057 "is_configured": true, 00:15:01.057 "data_offset": 0, 00:15:01.057 "data_size": 65536 00:15:01.057 }, 00:15:01.057 { 00:15:01.057 "name": "BaseBdev3", 00:15:01.057 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:01.057 "is_configured": true, 00:15:01.057 "data_offset": 0, 00:15:01.057 "data_size": 65536 00:15:01.057 }, 00:15:01.057 { 00:15:01.057 "name": "BaseBdev4", 00:15:01.057 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:01.057 "is_configured": true, 00:15:01.057 "data_offset": 0, 00:15:01.057 "data_size": 65536 00:15:01.057 } 00:15:01.057 ] 00:15:01.057 }' 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.057 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.315 128.00 IOPS, 384.00 MiB/s 
[2024-11-04T11:47:26.837Z] 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.315 "name": "raid_bdev1", 00:15:01.315 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:01.315 "strip_size_kb": 0, 00:15:01.315 "state": "online", 00:15:01.315 "raid_level": "raid1", 00:15:01.315 "superblock": false, 00:15:01.315 "num_base_bdevs": 4, 00:15:01.315 "num_base_bdevs_discovered": 3, 00:15:01.315 "num_base_bdevs_operational": 3, 00:15:01.315 "base_bdevs_list": [ 00:15:01.315 { 00:15:01.315 "name": null, 00:15:01.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.315 "is_configured": false, 00:15:01.315 "data_offset": 0, 00:15:01.315 "data_size": 65536 00:15:01.315 }, 00:15:01.315 { 00:15:01.315 "name": "BaseBdev2", 00:15:01.315 "uuid": "24197a9a-f94c-5f42-893a-0a6edb227e02", 00:15:01.315 "is_configured": true, 00:15:01.315 
"data_offset": 0, 00:15:01.315 "data_size": 65536 00:15:01.315 }, 00:15:01.315 { 00:15:01.315 "name": "BaseBdev3", 00:15:01.315 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:01.315 "is_configured": true, 00:15:01.315 "data_offset": 0, 00:15:01.315 "data_size": 65536 00:15:01.315 }, 00:15:01.315 { 00:15:01.315 "name": "BaseBdev4", 00:15:01.315 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:01.315 "is_configured": true, 00:15:01.315 "data_offset": 0, 00:15:01.315 "data_size": 65536 00:15:01.315 } 00:15:01.315 ] 00:15:01.315 }' 00:15:01.315 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.574 [2024-11-04 11:47:26.917905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.574 11:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:01.574 [2024-11-04 11:47:26.972526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:01.574 [2024-11-04 11:47:26.974515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.832 [2024-11-04 11:47:27.095143] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.832 [2024-11-04 11:47:27.095763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:01.832 [2024-11-04 11:47:27.316098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:01.832 [2024-11-04 11:47:27.316612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:02.090 [2024-11-04 11:47:27.556959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:02.606 136.67 IOPS, 410.00 MiB/s [2024-11-04T11:47:28.128Z] 11:47:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.606 11:47:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.606 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.606 "name": 
"raid_bdev1", 00:15:02.606 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:02.606 "strip_size_kb": 0, 00:15:02.606 "state": "online", 00:15:02.606 "raid_level": "raid1", 00:15:02.606 "superblock": false, 00:15:02.606 "num_base_bdevs": 4, 00:15:02.606 "num_base_bdevs_discovered": 4, 00:15:02.606 "num_base_bdevs_operational": 4, 00:15:02.606 "process": { 00:15:02.606 "type": "rebuild", 00:15:02.606 "target": "spare", 00:15:02.606 "progress": { 00:15:02.606 "blocks": 14336, 00:15:02.606 "percent": 21 00:15:02.606 } 00:15:02.607 }, 00:15:02.607 "base_bdevs_list": [ 00:15:02.607 { 00:15:02.607 "name": "spare", 00:15:02.607 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:02.607 "is_configured": true, 00:15:02.607 "data_offset": 0, 00:15:02.607 "data_size": 65536 00:15:02.607 }, 00:15:02.607 { 00:15:02.607 "name": "BaseBdev2", 00:15:02.607 "uuid": "24197a9a-f94c-5f42-893a-0a6edb227e02", 00:15:02.607 "is_configured": true, 00:15:02.607 "data_offset": 0, 00:15:02.607 "data_size": 65536 00:15:02.607 }, 00:15:02.607 { 00:15:02.607 "name": "BaseBdev3", 00:15:02.607 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:02.607 "is_configured": true, 00:15:02.607 "data_offset": 0, 00:15:02.607 "data_size": 65536 00:15:02.607 }, 00:15:02.607 { 00:15:02.607 "name": "BaseBdev4", 00:15:02.607 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:02.607 "is_configured": true, 00:15:02.607 "data_offset": 0, 00:15:02.607 "data_size": 65536 00:15:02.607 } 00:15:02.607 ] 00:15:02.607 }' 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.607 11:47:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.607 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.607 [2024-11-04 11:47:28.102264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.866 [2024-11-04 11:47:28.243556] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:02.866 [2024-11-04 11:47:28.243664] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.866 "name": "raid_bdev1", 00:15:02.866 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:02.866 "strip_size_kb": 0, 00:15:02.866 "state": "online", 00:15:02.866 "raid_level": "raid1", 00:15:02.866 "superblock": false, 00:15:02.866 "num_base_bdevs": 4, 00:15:02.866 "num_base_bdevs_discovered": 3, 00:15:02.866 "num_base_bdevs_operational": 3, 00:15:02.866 "process": { 00:15:02.866 "type": "rebuild", 00:15:02.866 "target": "spare", 00:15:02.866 "progress": { 00:15:02.866 "blocks": 18432, 00:15:02.866 "percent": 28 00:15:02.866 } 00:15:02.866 }, 00:15:02.866 "base_bdevs_list": [ 00:15:02.866 { 00:15:02.866 "name": "spare", 00:15:02.866 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:02.866 "is_configured": true, 00:15:02.866 "data_offset": 0, 00:15:02.866 "data_size": 65536 00:15:02.866 }, 00:15:02.866 { 00:15:02.866 "name": null, 00:15:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.866 "is_configured": false, 00:15:02.866 "data_offset": 0, 00:15:02.866 "data_size": 65536 00:15:02.866 }, 00:15:02.866 { 00:15:02.866 "name": "BaseBdev3", 00:15:02.866 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:02.866 "is_configured": true, 00:15:02.866 "data_offset": 0, 00:15:02.866 "data_size": 65536 00:15:02.866 }, 00:15:02.866 { 00:15:02.866 "name": "BaseBdev4", 00:15:02.866 "uuid": 
"9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:02.866 "is_configured": true, 00:15:02.866 "data_offset": 0, 00:15:02.866 "data_size": 65536 00:15:02.866 } 00:15:02.866 ] 00:15:02.866 }' 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=490 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.866 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.125 11:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.125 11:47:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.125 "name": "raid_bdev1", 00:15:03.125 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:03.125 "strip_size_kb": 0, 00:15:03.125 "state": "online", 00:15:03.125 "raid_level": "raid1", 00:15:03.125 "superblock": false, 00:15:03.125 "num_base_bdevs": 4, 00:15:03.125 "num_base_bdevs_discovered": 3, 00:15:03.125 "num_base_bdevs_operational": 3, 00:15:03.125 "process": { 00:15:03.125 "type": "rebuild", 00:15:03.125 "target": "spare", 00:15:03.125 "progress": { 00:15:03.125 "blocks": 20480, 00:15:03.125 "percent": 31 00:15:03.125 } 00:15:03.125 }, 00:15:03.125 "base_bdevs_list": [ 00:15:03.125 { 00:15:03.125 "name": "spare", 00:15:03.125 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:03.125 "is_configured": true, 00:15:03.125 "data_offset": 0, 00:15:03.125 "data_size": 65536 00:15:03.125 }, 00:15:03.125 { 00:15:03.125 "name": null, 00:15:03.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.125 "is_configured": false, 00:15:03.125 "data_offset": 0, 00:15:03.125 "data_size": 65536 00:15:03.125 }, 00:15:03.125 { 00:15:03.125 "name": "BaseBdev3", 00:15:03.125 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:03.125 "is_configured": true, 00:15:03.125 "data_offset": 0, 00:15:03.126 "data_size": 65536 00:15:03.126 }, 00:15:03.126 { 00:15:03.126 "name": "BaseBdev4", 00:15:03.126 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:03.126 "is_configured": true, 00:15:03.126 "data_offset": 0, 00:15:03.126 "data_size": 65536 00:15:03.126 } 00:15:03.126 ] 00:15:03.126 }' 00:15:03.126 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.126 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.126 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.126 11:47:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.126 11:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.384 123.50 IOPS, 370.50 MiB/s [2024-11-04T11:47:28.906Z] [2024-11-04 11:47:28.727418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:03.385 [2024-11-04 11:47:28.828621] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:03.385 [2024-11-04 11:47:28.828958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:03.953 [2024-11-04 11:47:29.241144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.224 11:47:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.224 [2024-11-04 11:47:29.584962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:04.224 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.224 "name": "raid_bdev1", 00:15:04.225 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:04.225 "strip_size_kb": 0, 00:15:04.225 "state": "online", 00:15:04.225 "raid_level": "raid1", 00:15:04.225 "superblock": false, 00:15:04.225 "num_base_bdevs": 4, 00:15:04.225 "num_base_bdevs_discovered": 3, 00:15:04.225 "num_base_bdevs_operational": 3, 00:15:04.225 "process": { 00:15:04.225 "type": "rebuild", 00:15:04.225 "target": "spare", 00:15:04.225 "progress": { 00:15:04.225 "blocks": 36864, 00:15:04.225 "percent": 56 00:15:04.225 } 00:15:04.225 }, 00:15:04.225 "base_bdevs_list": [ 00:15:04.225 { 00:15:04.225 "name": "spare", 00:15:04.225 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:04.225 "is_configured": true, 00:15:04.225 "data_offset": 0, 00:15:04.225 "data_size": 65536 00:15:04.225 }, 00:15:04.225 { 00:15:04.225 "name": null, 00:15:04.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.225 "is_configured": false, 00:15:04.225 "data_offset": 0, 00:15:04.225 "data_size": 65536 00:15:04.225 }, 00:15:04.225 { 00:15:04.225 "name": "BaseBdev3", 00:15:04.225 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:04.225 "is_configured": true, 00:15:04.225 "data_offset": 0, 00:15:04.225 "data_size": 65536 00:15:04.225 }, 00:15:04.225 { 00:15:04.225 "name": "BaseBdev4", 00:15:04.225 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:04.225 "is_configured": true, 00:15:04.225 "data_offset": 0, 00:15:04.225 "data_size": 65536 00:15:04.225 } 00:15:04.225 ] 00:15:04.225 }' 00:15:04.225 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.225 111.40 
IOPS, 334.20 MiB/s [2024-11-04T11:47:29.747Z] 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.225 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.225 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.225 11:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.486 [2024-11-04 11:47:29.932584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:04.744 [2024-11-04 11:47:30.140374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:05.004 [2024-11-04 11:47:30.457107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:05.004 [2024-11-04 11:47:30.458371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:05.264 97.33 IOPS, 292.00 MiB/s [2024-11-04T11:47:30.786Z] [2024-11-04 11:47:30.682853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.264 "name": "raid_bdev1", 00:15:05.264 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:05.264 "strip_size_kb": 0, 00:15:05.264 "state": "online", 00:15:05.264 "raid_level": "raid1", 00:15:05.264 "superblock": false, 00:15:05.264 "num_base_bdevs": 4, 00:15:05.264 "num_base_bdevs_discovered": 3, 00:15:05.264 "num_base_bdevs_operational": 3, 00:15:05.264 "process": { 00:15:05.264 "type": "rebuild", 00:15:05.264 "target": "spare", 00:15:05.264 "progress": { 00:15:05.264 "blocks": 53248, 00:15:05.264 "percent": 81 00:15:05.264 } 00:15:05.264 }, 00:15:05.264 "base_bdevs_list": [ 00:15:05.264 { 00:15:05.264 "name": "spare", 00:15:05.264 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:05.264 "is_configured": true, 00:15:05.264 "data_offset": 0, 00:15:05.264 "data_size": 65536 00:15:05.264 }, 00:15:05.264 { 00:15:05.264 "name": null, 00:15:05.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.264 "is_configured": false, 00:15:05.264 "data_offset": 0, 00:15:05.264 "data_size": 65536 00:15:05.264 }, 00:15:05.264 { 00:15:05.264 "name": "BaseBdev3", 00:15:05.264 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:05.264 "is_configured": true, 00:15:05.264 "data_offset": 0, 00:15:05.264 "data_size": 65536 00:15:05.264 }, 00:15:05.264 { 00:15:05.264 "name": "BaseBdev4", 00:15:05.264 "uuid": 
"9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:05.264 "is_configured": true, 00:15:05.264 "data_offset": 0, 00:15:05.264 "data_size": 65536 00:15:05.264 } 00:15:05.264 ] 00:15:05.264 }' 00:15:05.264 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.523 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.523 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.523 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.523 11:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.523 [2024-11-04 11:47:30.907308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:05.523 [2024-11-04 11:47:31.008892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:05.523 [2024-11-04 11:47:31.009305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:06.091 [2024-11-04 11:47:31.362418] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:06.091 [2024-11-04 11:47:31.468313] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:06.091 [2024-11-04 11:47:31.472837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.350 89.00 IOPS, 267.00 MiB/s [2024-11-04T11:47:31.872Z] 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.350 11:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.609 "name": "raid_bdev1", 00:15:06.609 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:06.609 "strip_size_kb": 0, 00:15:06.609 "state": "online", 00:15:06.609 "raid_level": "raid1", 00:15:06.609 "superblock": false, 00:15:06.609 "num_base_bdevs": 4, 00:15:06.609 "num_base_bdevs_discovered": 3, 00:15:06.609 "num_base_bdevs_operational": 3, 00:15:06.609 "base_bdevs_list": [ 00:15:06.609 { 00:15:06.609 "name": "spare", 00:15:06.609 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:06.609 "is_configured": true, 00:15:06.609 "data_offset": 0, 00:15:06.609 "data_size": 65536 00:15:06.609 }, 00:15:06.609 { 00:15:06.609 "name": null, 00:15:06.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.609 "is_configured": false, 00:15:06.609 "data_offset": 0, 00:15:06.609 "data_size": 65536 00:15:06.609 }, 00:15:06.609 { 00:15:06.609 "name": "BaseBdev3", 00:15:06.609 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:06.609 "is_configured": true, 00:15:06.609 "data_offset": 0, 
00:15:06.609 "data_size": 65536 00:15:06.609 }, 00:15:06.609 { 00:15:06.609 "name": "BaseBdev4", 00:15:06.609 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:06.609 "is_configured": true, 00:15:06.609 "data_offset": 0, 00:15:06.609 "data_size": 65536 00:15:06.609 } 00:15:06.609 ] 00:15:06.609 }' 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.609 11:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.610 11:47:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.610 "name": "raid_bdev1", 00:15:06.610 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:06.610 "strip_size_kb": 0, 00:15:06.610 "state": "online", 00:15:06.610 "raid_level": "raid1", 00:15:06.610 "superblock": false, 00:15:06.610 "num_base_bdevs": 4, 00:15:06.610 "num_base_bdevs_discovered": 3, 00:15:06.610 "num_base_bdevs_operational": 3, 00:15:06.610 "base_bdevs_list": [ 00:15:06.610 { 00:15:06.610 "name": "spare", 00:15:06.610 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:06.610 "is_configured": true, 00:15:06.610 "data_offset": 0, 00:15:06.610 "data_size": 65536 00:15:06.610 }, 00:15:06.610 { 00:15:06.610 "name": null, 00:15:06.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.610 "is_configured": false, 00:15:06.610 "data_offset": 0, 00:15:06.610 "data_size": 65536 00:15:06.610 }, 00:15:06.610 { 00:15:06.610 "name": "BaseBdev3", 00:15:06.610 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:06.610 "is_configured": true, 00:15:06.610 "data_offset": 0, 00:15:06.610 "data_size": 65536 00:15:06.610 }, 00:15:06.610 { 00:15:06.610 "name": "BaseBdev4", 00:15:06.610 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:06.610 "is_configured": true, 00:15:06.610 "data_offset": 0, 00:15:06.610 "data_size": 65536 00:15:06.610 } 00:15:06.610 ] 00:15:06.610 }' 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:06.610 11:47:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.610 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.869 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.869 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.869 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.869 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.869 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.869 "name": "raid_bdev1", 00:15:06.869 "uuid": "f65d034e-cf08-4583-83ee-6b147a30f670", 00:15:06.869 "strip_size_kb": 0, 00:15:06.869 "state": "online", 00:15:06.869 "raid_level": "raid1", 00:15:06.869 "superblock": false, 00:15:06.869 "num_base_bdevs": 4, 00:15:06.869 "num_base_bdevs_discovered": 3, 00:15:06.869 "num_base_bdevs_operational": 3, 00:15:06.869 "base_bdevs_list": [ 00:15:06.869 
{ 00:15:06.869 "name": "spare", 00:15:06.869 "uuid": "ea3c7fa7-abc7-51bc-98e7-d897467762b8", 00:15:06.869 "is_configured": true, 00:15:06.869 "data_offset": 0, 00:15:06.869 "data_size": 65536 00:15:06.869 }, 00:15:06.869 { 00:15:06.869 "name": null, 00:15:06.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.869 "is_configured": false, 00:15:06.869 "data_offset": 0, 00:15:06.869 "data_size": 65536 00:15:06.869 }, 00:15:06.869 { 00:15:06.869 "name": "BaseBdev3", 00:15:06.869 "uuid": "720a92a1-d8f8-55f3-a7a0-6491e9f3d7e5", 00:15:06.869 "is_configured": true, 00:15:06.869 "data_offset": 0, 00:15:06.869 "data_size": 65536 00:15:06.869 }, 00:15:06.869 { 00:15:06.869 "name": "BaseBdev4", 00:15:06.869 "uuid": "9a797c47-8238-5eeb-99f9-4848732b8b63", 00:15:06.869 "is_configured": true, 00:15:06.869 "data_offset": 0, 00:15:06.869 "data_size": 65536 00:15:06.869 } 00:15:06.869 ] 00:15:06.869 }' 00:15:06.869 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.869 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.128 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.128 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.128 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.128 [2024-11-04 11:47:32.606939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.128 [2024-11-04 11:47:32.607025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.387 81.25 IOPS, 243.75 MiB/s 00:15:07.387 Latency(us) 00:15:07.387 [2024-11-04T11:47:32.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.387 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:07.387 raid_bdev1 : 8.08 80.71 
242.12 0.00 0.00 17037.61 313.01 115847.04 00:15:07.387 [2024-11-04T11:47:32.909Z] =================================================================================================================== 00:15:07.387 [2024-11-04T11:47:32.909Z] Total : 80.71 242.12 0.00 0.00 17037.61 313.01 115847.04 00:15:07.387 { 00:15:07.387 "results": [ 00:15:07.387 { 00:15:07.387 "job": "raid_bdev1", 00:15:07.387 "core_mask": "0x1", 00:15:07.387 "workload": "randrw", 00:15:07.387 "percentage": 50, 00:15:07.387 "status": "finished", 00:15:07.387 "queue_depth": 2, 00:15:07.387 "io_size": 3145728, 00:15:07.387 "runtime": 8.078634, 00:15:07.387 "iops": 80.7067135359765, 00:15:07.387 "mibps": 242.1201406079295, 00:15:07.387 "io_failed": 0, 00:15:07.387 "io_timeout": 0, 00:15:07.387 "avg_latency_us": 17037.606193907894, 00:15:07.387 "min_latency_us": 313.0131004366812, 00:15:07.387 "max_latency_us": 115847.04279475982 00:15:07.387 } 00:15:07.387 ], 00:15:07.387 "core_count": 1 00:15:07.387 } 00:15:07.387 [2024-11-04 11:47:32.698477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.387 [2024-11-04 11:47:32.698537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.387 [2024-11-04 11:47:32.698648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.387 [2024-11-04 11:47:32.698663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.387 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:07.726 /dev/nbd0 00:15:07.726 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:07.726 11:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:07.726 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:07.726 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 
00:15:07.726 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:07.726 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:07.726 11:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.726 1+0 records in 00:15:07.726 1+0 records out 00:15:07.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425216 s, 9.6 MB/s 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:07.726 11:47:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.726 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:07.983 /dev/nbd1 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:07.983 11:47:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.983 1+0 records in 00:15:07.983 1+0 records out 00:15:07.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524563 s, 7.8 MB/s 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.983 
11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.983 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.241 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:08.499 /dev/nbd1 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.499 1+0 records in 
00:15:08.499 1+0 records out 00:15:08.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263447 s, 15.5 MB/s 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.499 11:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:08.757 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:08.757 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.757 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:08.757 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:08.757 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:08.757 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:08.757 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.015 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.274 11:47:34 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78992 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 78992 ']' 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 78992 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78992 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78992' 00:15:09.274 killing process with pid 78992 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 78992 00:15:09.274 Received shutdown signal, test time was about 9.990063 seconds 00:15:09.274 00:15:09.274 Latency(us) 00:15:09.274 [2024-11-04T11:47:34.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.274 [2024-11-04T11:47:34.796Z] =================================================================================================================== 00:15:09.274 [2024-11-04T11:47:34.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:15:09.274 [2024-11-04 11:47:34.582590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.274 11:47:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 78992 00:15:09.543 [2024-11-04 11:47:35.016093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:10.924 00:15:10.924 real 0m13.548s 00:15:10.924 user 0m17.187s 00:15:10.924 sys 0m1.789s 00:15:10.924 ************************************ 00:15:10.924 END TEST raid_rebuild_test_io 00:15:10.924 ************************************ 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.924 11:47:36 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:10.924 11:47:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:10.924 11:47:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:10.924 11:47:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.924 ************************************ 00:15:10.924 START TEST raid_rebuild_test_sb_io 00:15:10.924 ************************************ 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:10.924 11:47:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:10.924 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79401 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79401 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79401 ']' 00:15:10.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:10.925 11:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.925 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:10.925 Zero copy mechanism will not be used. 00:15:10.925 [2024-11-04 11:47:36.383162] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:15:10.925 [2024-11-04 11:47:36.383276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79401 ] 00:15:11.184 [2024-11-04 11:47:36.556745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.444 [2024-11-04 11:47:36.719702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.444 [2024-11-04 11:47:36.928512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.444 [2024-11-04 11:47:36.928580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.013 BaseBdev1_malloc 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.013 [2024-11-04 11:47:37.298738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:12.013 [2024-11-04 11:47:37.298863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.013 [2024-11-04 11:47:37.298907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:12.013 [2024-11-04 11:47:37.298939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.013 [2024-11-04 11:47:37.301260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.013 [2024-11-04 11:47:37.301360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:12.013 BaseBdev1 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.013 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.013 BaseBdev2_malloc 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.014 [2024-11-04 11:47:37.356900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:12.014 [2024-11-04 11:47:37.357049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.014 [2024-11-04 11:47:37.357090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:12.014 [2024-11-04 11:47:37.357144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.014 [2024-11-04 11:47:37.359233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.014 [2024-11-04 11:47:37.359268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:12.014 BaseBdev2 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.014 BaseBdev3_malloc 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.014 11:47:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.014 [2024-11-04 11:47:37.427053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:12.014 [2024-11-04 11:47:37.427171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.014 [2024-11-04 11:47:37.427211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:12.014 [2024-11-04 11:47:37.427242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.014 [2024-11-04 11:47:37.429476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.014 [2024-11-04 11:47:37.429558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:12.014 BaseBdev3 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.014 BaseBdev4_malloc 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.014 [2024-11-04 11:47:37.484389] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:15:12.014 [2024-11-04 11:47:37.484519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.014 [2024-11-04 11:47:37.484561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:12.014 [2024-11-04 11:47:37.484598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.014 [2024-11-04 11:47:37.487046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.014 [2024-11-04 11:47:37.487134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:12.014 BaseBdev4 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.014 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.273 spare_malloc 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.273 spare_delay 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.273 [2024-11-04 11:47:37.553259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.273 [2024-11-04 11:47:37.553390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.273 [2024-11-04 11:47:37.553450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:12.273 [2024-11-04 11:47:37.553503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.273 [2024-11-04 11:47:37.555852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.273 [2024-11-04 11:47:37.555937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.273 spare 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.273 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.273 [2024-11-04 11:47:37.565293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.273 [2024-11-04 11:47:37.567290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.273 [2024-11-04 11:47:37.567456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.273 [2024-11-04 11:47:37.567578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:12.273 [2024-11-04 11:47:37.567827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:15:12.274 [2024-11-04 11:47:37.567885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:12.274 [2024-11-04 11:47:37.568251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:12.274 [2024-11-04 11:47:37.568532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:12.274 [2024-11-04 11:47:37.568586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:12.274 [2024-11-04 11:47:37.568827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.274 "name": "raid_bdev1", 00:15:12.274 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:12.274 "strip_size_kb": 0, 00:15:12.274 "state": "online", 00:15:12.274 "raid_level": "raid1", 00:15:12.274 "superblock": true, 00:15:12.274 "num_base_bdevs": 4, 00:15:12.274 "num_base_bdevs_discovered": 4, 00:15:12.274 "num_base_bdevs_operational": 4, 00:15:12.274 "base_bdevs_list": [ 00:15:12.274 { 00:15:12.274 "name": "BaseBdev1", 00:15:12.274 "uuid": "3708ad48-3901-5762-aaa8-759a6719054a", 00:15:12.274 "is_configured": true, 00:15:12.274 "data_offset": 2048, 00:15:12.274 "data_size": 63488 00:15:12.274 }, 00:15:12.274 { 00:15:12.274 "name": "BaseBdev2", 00:15:12.274 "uuid": "79f53ec5-e13e-5c82-9560-abdb3ec84964", 00:15:12.274 "is_configured": true, 00:15:12.274 "data_offset": 2048, 00:15:12.274 "data_size": 63488 00:15:12.274 }, 00:15:12.274 { 00:15:12.274 "name": "BaseBdev3", 00:15:12.274 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:12.274 "is_configured": true, 00:15:12.274 "data_offset": 2048, 00:15:12.274 "data_size": 63488 00:15:12.274 }, 00:15:12.274 { 00:15:12.274 "name": "BaseBdev4", 00:15:12.274 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:12.274 "is_configured": true, 00:15:12.274 "data_offset": 2048, 00:15:12.274 "data_size": 63488 00:15:12.274 } 00:15:12.274 ] 00:15:12.274 }' 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:12.274 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.533 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.534 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.534 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.534 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:12.534 [2024-11-04 11:47:37.984983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.534 11:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.534 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:12.534 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.534 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.534 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:12.534 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.534 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:12.794 11:47:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.794 [2024-11-04 11:47:38.088409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.794 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.794 "name": "raid_bdev1", 00:15:12.794 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:12.795 "strip_size_kb": 0, 00:15:12.795 "state": "online", 00:15:12.795 "raid_level": "raid1", 00:15:12.795 "superblock": true, 00:15:12.795 "num_base_bdevs": 4, 00:15:12.795 "num_base_bdevs_discovered": 3, 00:15:12.795 "num_base_bdevs_operational": 3, 00:15:12.795 "base_bdevs_list": [ 00:15:12.795 { 00:15:12.795 "name": null, 00:15:12.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.795 "is_configured": false, 00:15:12.795 "data_offset": 0, 00:15:12.795 "data_size": 63488 00:15:12.795 }, 00:15:12.795 { 00:15:12.795 "name": "BaseBdev2", 00:15:12.795 "uuid": "79f53ec5-e13e-5c82-9560-abdb3ec84964", 00:15:12.795 "is_configured": true, 00:15:12.795 "data_offset": 2048, 00:15:12.795 "data_size": 63488 00:15:12.795 }, 00:15:12.795 { 00:15:12.795 "name": "BaseBdev3", 00:15:12.795 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:12.795 "is_configured": true, 00:15:12.795 "data_offset": 2048, 00:15:12.795 "data_size": 63488 00:15:12.795 }, 00:15:12.795 { 00:15:12.795 "name": "BaseBdev4", 00:15:12.795 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:12.795 "is_configured": true, 00:15:12.795 "data_offset": 2048, 00:15:12.795 "data_size": 63488 00:15:12.795 } 00:15:12.795 ] 00:15:12.795 }' 00:15:12.795 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.795 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.795 [2024-11-04 11:47:38.193719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:12.795 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:12.795 Zero copy mechanism will not be used. 
00:15:12.795 Running I/O for 60 seconds... 00:15:13.055 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.055 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.055 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.055 [2024-11-04 11:47:38.543445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.314 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.314 11:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:13.314 [2024-11-04 11:47:38.597427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:13.314 [2024-11-04 11:47:38.599601] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.314 [2024-11-04 11:47:38.702661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.314 [2024-11-04 11:47:38.703430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:13.314 [2024-11-04 11:47:38.828376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.314 [2024-11-04 11:47:38.829332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:13.883 [2024-11-04 11:47:39.137766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:13.883 [2024-11-04 11:47:39.138497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:13.883 163.00 IOPS, 489.00 MiB/s [2024-11-04T11:47:39.405Z] 
[2024-11-04 11:47:39.272937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:14.142 [2024-11-04 11:47:39.519478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.142 "name": "raid_bdev1", 00:15:14.142 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:14.142 "strip_size_kb": 0, 00:15:14.142 "state": "online", 00:15:14.142 "raid_level": "raid1", 00:15:14.142 "superblock": true, 00:15:14.142 "num_base_bdevs": 4, 00:15:14.142 "num_base_bdevs_discovered": 4, 00:15:14.142 "num_base_bdevs_operational": 4, 00:15:14.142 "process": { 00:15:14.142 "type": "rebuild", 00:15:14.142 "target": 
"spare", 00:15:14.142 "progress": { 00:15:14.142 "blocks": 14336, 00:15:14.142 "percent": 22 00:15:14.142 } 00:15:14.142 }, 00:15:14.142 "base_bdevs_list": [ 00:15:14.142 { 00:15:14.142 "name": "spare", 00:15:14.142 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:14.142 "is_configured": true, 00:15:14.142 "data_offset": 2048, 00:15:14.142 "data_size": 63488 00:15:14.142 }, 00:15:14.142 { 00:15:14.142 "name": "BaseBdev2", 00:15:14.142 "uuid": "79f53ec5-e13e-5c82-9560-abdb3ec84964", 00:15:14.142 "is_configured": true, 00:15:14.142 "data_offset": 2048, 00:15:14.142 "data_size": 63488 00:15:14.142 }, 00:15:14.142 { 00:15:14.142 "name": "BaseBdev3", 00:15:14.142 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:14.142 "is_configured": true, 00:15:14.142 "data_offset": 2048, 00:15:14.142 "data_size": 63488 00:15:14.142 }, 00:15:14.142 { 00:15:14.142 "name": "BaseBdev4", 00:15:14.142 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:14.142 "is_configured": true, 00:15:14.142 "data_offset": 2048, 00:15:14.142 "data_size": 63488 00:15:14.142 } 00:15:14.142 ] 00:15:14.142 }' 00:15:14.142 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.400 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.401 [2024-11-04 11:47:39.722033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:15:14.401 [2024-11-04 11:47:39.737869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:14.401 [2024-11-04 11:47:39.738267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:14.401 [2024-11-04 11:47:39.841535] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:14.401 [2024-11-04 11:47:39.853235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.401 [2024-11-04 11:47:39.853302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.401 [2024-11-04 11:47:39.853321] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:14.401 [2024-11-04 11:47:39.892386] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.401 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.659 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.659 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.659 "name": "raid_bdev1", 00:15:14.659 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:14.659 "strip_size_kb": 0, 00:15:14.659 "state": "online", 00:15:14.659 "raid_level": "raid1", 00:15:14.659 "superblock": true, 00:15:14.659 "num_base_bdevs": 4, 00:15:14.659 "num_base_bdevs_discovered": 3, 00:15:14.659 "num_base_bdevs_operational": 3, 00:15:14.659 "base_bdevs_list": [ 00:15:14.659 { 00:15:14.659 "name": null, 00:15:14.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.659 "is_configured": false, 00:15:14.659 "data_offset": 0, 00:15:14.659 "data_size": 63488 00:15:14.659 }, 00:15:14.659 { 00:15:14.659 "name": "BaseBdev2", 00:15:14.659 "uuid": "79f53ec5-e13e-5c82-9560-abdb3ec84964", 00:15:14.659 "is_configured": true, 00:15:14.659 "data_offset": 2048, 00:15:14.659 "data_size": 63488 00:15:14.659 }, 00:15:14.659 { 00:15:14.659 "name": "BaseBdev3", 00:15:14.659 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:14.659 "is_configured": true, 00:15:14.659 "data_offset": 2048, 00:15:14.659 "data_size": 63488 00:15:14.659 }, 00:15:14.659 { 00:15:14.659 
"name": "BaseBdev4", 00:15:14.659 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:14.659 "is_configured": true, 00:15:14.659 "data_offset": 2048, 00:15:14.659 "data_size": 63488 00:15:14.659 } 00:15:14.659 ] 00:15:14.659 }' 00:15:14.659 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.659 11:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.918 137.00 IOPS, 411.00 MiB/s [2024-11-04T11:47:40.440Z] 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.918 "name": "raid_bdev1", 00:15:14.918 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:14.918 "strip_size_kb": 0, 00:15:14.918 "state": "online", 00:15:14.918 "raid_level": "raid1", 00:15:14.918 "superblock": true, 00:15:14.918 "num_base_bdevs": 4, 00:15:14.918 
"num_base_bdevs_discovered": 3, 00:15:14.918 "num_base_bdevs_operational": 3, 00:15:14.918 "base_bdevs_list": [ 00:15:14.918 { 00:15:14.918 "name": null, 00:15:14.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.918 "is_configured": false, 00:15:14.918 "data_offset": 0, 00:15:14.918 "data_size": 63488 00:15:14.918 }, 00:15:14.918 { 00:15:14.918 "name": "BaseBdev2", 00:15:14.918 "uuid": "79f53ec5-e13e-5c82-9560-abdb3ec84964", 00:15:14.918 "is_configured": true, 00:15:14.918 "data_offset": 2048, 00:15:14.918 "data_size": 63488 00:15:14.918 }, 00:15:14.918 { 00:15:14.918 "name": "BaseBdev3", 00:15:14.918 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:14.918 "is_configured": true, 00:15:14.918 "data_offset": 2048, 00:15:14.918 "data_size": 63488 00:15:14.918 }, 00:15:14.918 { 00:15:14.918 "name": "BaseBdev4", 00:15:14.918 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:14.918 "is_configured": true, 00:15:14.918 "data_offset": 2048, 00:15:14.918 "data_size": 63488 00:15:14.918 } 00:15:14.918 ] 00:15:14.918 }' 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.918 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.177 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.177 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.177 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.177 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.177 [2024-11-04 11:47:40.493272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.177 11:47:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.177 11:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:15.177 [2024-11-04 11:47:40.550818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:15.177 [2024-11-04 11:47:40.553095] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.177 [2024-11-04 11:47:40.680882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:15.435 [2024-11-04 11:47:40.918851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:15.435 [2024-11-04 11:47:40.919825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:16.002 146.33 IOPS, 439.00 MiB/s [2024-11-04T11:47:41.524Z] [2024-11-04 11:47:41.332663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:16.002 [2024-11-04 11:47:41.333388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.260 [2024-11-04 11:47:41.553195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.260 "name": "raid_bdev1", 00:15:16.260 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:16.260 "strip_size_kb": 0, 00:15:16.260 "state": "online", 00:15:16.260 "raid_level": "raid1", 00:15:16.260 "superblock": true, 00:15:16.260 "num_base_bdevs": 4, 00:15:16.260 "num_base_bdevs_discovered": 4, 00:15:16.260 "num_base_bdevs_operational": 4, 00:15:16.260 "process": { 00:15:16.260 "type": "rebuild", 00:15:16.260 "target": "spare", 00:15:16.260 "progress": { 00:15:16.260 "blocks": 8192, 00:15:16.260 "percent": 12 00:15:16.260 } 00:15:16.260 }, 00:15:16.260 "base_bdevs_list": [ 00:15:16.260 { 00:15:16.260 "name": "spare", 00:15:16.260 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 }, 00:15:16.260 { 00:15:16.260 "name": "BaseBdev2", 00:15:16.260 "uuid": "79f53ec5-e13e-5c82-9560-abdb3ec84964", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 }, 00:15:16.260 { 00:15:16.260 "name": "BaseBdev3", 00:15:16.260 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 }, 00:15:16.260 
{ 00:15:16.260 "name": "BaseBdev4", 00:15:16.260 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:16.260 "is_configured": true, 00:15:16.260 "data_offset": 2048, 00:15:16.260 "data_size": 63488 00:15:16.260 } 00:15:16.260 ] 00:15:16.260 }' 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:16.260 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.260 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.260 [2024-11-04 11:47:41.693439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.518 [2024-11-04 11:47:41.920797] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:16.518 [2024-11-04 11:47:41.920935] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.518 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.518 "name": "raid_bdev1", 00:15:16.518 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:16.518 "strip_size_kb": 0, 00:15:16.518 "state": "online", 00:15:16.518 "raid_level": "raid1", 00:15:16.518 "superblock": true, 00:15:16.518 "num_base_bdevs": 4, 00:15:16.518 "num_base_bdevs_discovered": 3, 00:15:16.518 
"num_base_bdevs_operational": 3, 00:15:16.518 "process": { 00:15:16.518 "type": "rebuild", 00:15:16.518 "target": "spare", 00:15:16.518 "progress": { 00:15:16.518 "blocks": 14336, 00:15:16.518 "percent": 22 00:15:16.518 } 00:15:16.518 }, 00:15:16.518 "base_bdevs_list": [ 00:15:16.518 { 00:15:16.518 "name": "spare", 00:15:16.519 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:16.519 "is_configured": true, 00:15:16.519 "data_offset": 2048, 00:15:16.519 "data_size": 63488 00:15:16.519 }, 00:15:16.519 { 00:15:16.519 "name": null, 00:15:16.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.519 "is_configured": false, 00:15:16.519 "data_offset": 0, 00:15:16.519 "data_size": 63488 00:15:16.519 }, 00:15:16.519 { 00:15:16.519 "name": "BaseBdev3", 00:15:16.519 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:16.519 "is_configured": true, 00:15:16.519 "data_offset": 2048, 00:15:16.519 "data_size": 63488 00:15:16.519 }, 00:15:16.519 { 00:15:16.519 "name": "BaseBdev4", 00:15:16.519 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:16.519 "is_configured": true, 00:15:16.519 "data_offset": 2048, 00:15:16.519 "data_size": 63488 00:15:16.519 } 00:15:16.519 ] 00:15:16.519 }' 00:15:16.519 11:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.519 [2024-11-04 11:47:42.027671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:16.519 [2024-11-04 11:47:42.028337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:16.519 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.519 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=504 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.779 "name": "raid_bdev1", 00:15:16.779 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:16.779 "strip_size_kb": 0, 00:15:16.779 "state": "online", 00:15:16.779 "raid_level": "raid1", 00:15:16.779 "superblock": true, 00:15:16.779 "num_base_bdevs": 4, 00:15:16.779 "num_base_bdevs_discovered": 3, 00:15:16.779 "num_base_bdevs_operational": 3, 00:15:16.779 "process": { 00:15:16.779 "type": "rebuild", 00:15:16.779 "target": "spare", 00:15:16.779 "progress": { 00:15:16.779 "blocks": 16384, 00:15:16.779 "percent": 25 
00:15:16.779 } 00:15:16.779 }, 00:15:16.779 "base_bdevs_list": [ 00:15:16.779 { 00:15:16.779 "name": "spare", 00:15:16.779 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:16.779 "is_configured": true, 00:15:16.779 "data_offset": 2048, 00:15:16.779 "data_size": 63488 00:15:16.779 }, 00:15:16.779 { 00:15:16.779 "name": null, 00:15:16.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.779 "is_configured": false, 00:15:16.779 "data_offset": 0, 00:15:16.779 "data_size": 63488 00:15:16.779 }, 00:15:16.779 { 00:15:16.779 "name": "BaseBdev3", 00:15:16.779 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:16.779 "is_configured": true, 00:15:16.779 "data_offset": 2048, 00:15:16.779 "data_size": 63488 00:15:16.779 }, 00:15:16.779 { 00:15:16.779 "name": "BaseBdev4", 00:15:16.779 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:16.779 "is_configured": true, 00:15:16.779 "data_offset": 2048, 00:15:16.779 "data_size": 63488 00:15:16.779 } 00:15:16.779 ] 00:15:16.779 }' 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.779 132.75 IOPS, 398.25 MiB/s [2024-11-04T11:47:42.301Z] 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.779 11:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.038 [2024-11-04 11:47:42.457904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:17.606 [2024-11-04 11:47:43.108345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:17.865 118.60 IOPS, 355.80 MiB/s 
[2024-11-04T11:47:43.387Z] [2024-11-04 11:47:43.223269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:17.865 [2024-11-04 11:47:43.223623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.865 "name": "raid_bdev1", 00:15:17.865 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:17.865 "strip_size_kb": 0, 00:15:17.865 "state": "online", 00:15:17.865 "raid_level": "raid1", 00:15:17.865 "superblock": true, 00:15:17.865 "num_base_bdevs": 4, 00:15:17.865 
"num_base_bdevs_discovered": 3, 00:15:17.865 "num_base_bdevs_operational": 3, 00:15:17.865 "process": { 00:15:17.865 "type": "rebuild", 00:15:17.865 "target": "spare", 00:15:17.865 "progress": { 00:15:17.865 "blocks": 34816, 00:15:17.865 "percent": 54 00:15:17.865 } 00:15:17.865 }, 00:15:17.865 "base_bdevs_list": [ 00:15:17.865 { 00:15:17.865 "name": "spare", 00:15:17.865 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:17.865 "is_configured": true, 00:15:17.865 "data_offset": 2048, 00:15:17.865 "data_size": 63488 00:15:17.865 }, 00:15:17.865 { 00:15:17.865 "name": null, 00:15:17.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.865 "is_configured": false, 00:15:17.865 "data_offset": 0, 00:15:17.865 "data_size": 63488 00:15:17.865 }, 00:15:17.865 { 00:15:17.865 "name": "BaseBdev3", 00:15:17.865 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:17.865 "is_configured": true, 00:15:17.865 "data_offset": 2048, 00:15:17.865 "data_size": 63488 00:15:17.865 }, 00:15:17.865 { 00:15:17.865 "name": "BaseBdev4", 00:15:17.865 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:17.865 "is_configured": true, 00:15:17.865 "data_offset": 2048, 00:15:17.865 "data_size": 63488 00:15:17.865 } 00:15:17.865 ] 00:15:17.865 }' 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.865 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.123 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.123 11:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.123 [2024-11-04 11:47:43.587248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:18.381 [2024-11-04 
11:47:43.715687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:18.640 [2024-11-04 11:47:44.150349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:18.898 105.67 IOPS, 317.00 MiB/s [2024-11-04T11:47:44.420Z] 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.898 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.158 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.158 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.158 "name": "raid_bdev1", 00:15:19.158 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:19.158 "strip_size_kb": 0, 00:15:19.158 "state": "online", 00:15:19.158 "raid_level": "raid1", 00:15:19.158 "superblock": true, 00:15:19.158 "num_base_bdevs": 4, 00:15:19.158 
"num_base_bdevs_discovered": 3, 00:15:19.158 "num_base_bdevs_operational": 3, 00:15:19.158 "process": { 00:15:19.158 "type": "rebuild", 00:15:19.158 "target": "spare", 00:15:19.158 "progress": { 00:15:19.158 "blocks": 49152, 00:15:19.158 "percent": 77 00:15:19.158 } 00:15:19.158 }, 00:15:19.158 "base_bdevs_list": [ 00:15:19.158 { 00:15:19.158 "name": "spare", 00:15:19.158 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:19.158 "is_configured": true, 00:15:19.158 "data_offset": 2048, 00:15:19.158 "data_size": 63488 00:15:19.158 }, 00:15:19.158 { 00:15:19.158 "name": null, 00:15:19.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.158 "is_configured": false, 00:15:19.158 "data_offset": 0, 00:15:19.158 "data_size": 63488 00:15:19.158 }, 00:15:19.158 { 00:15:19.158 "name": "BaseBdev3", 00:15:19.158 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:19.158 "is_configured": true, 00:15:19.158 "data_offset": 2048, 00:15:19.158 "data_size": 63488 00:15:19.158 }, 00:15:19.158 { 00:15:19.158 "name": "BaseBdev4", 00:15:19.158 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:19.158 "is_configured": true, 00:15:19.158 "data_offset": 2048, 00:15:19.158 "data_size": 63488 00:15:19.158 } 00:15:19.158 ] 00:15:19.158 }' 00:15:19.158 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.158 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.158 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.158 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.158 11:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.726 [2024-11-04 11:47:45.133809] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:19.726 95.71 IOPS, 287.14 MiB/s 
[2024-11-04T11:47:45.248Z] [2024-11-04 11:47:45.233582] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:19.726 [2024-11-04 11:47:45.235947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.294 "name": "raid_bdev1", 00:15:20.294 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:20.294 "strip_size_kb": 0, 00:15:20.294 "state": "online", 00:15:20.294 "raid_level": "raid1", 00:15:20.294 "superblock": true, 00:15:20.294 "num_base_bdevs": 4, 00:15:20.294 "num_base_bdevs_discovered": 3, 00:15:20.294 "num_base_bdevs_operational": 3, 00:15:20.294 
"base_bdevs_list": [ 00:15:20.294 { 00:15:20.294 "name": "spare", 00:15:20.294 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:20.294 "is_configured": true, 00:15:20.294 "data_offset": 2048, 00:15:20.294 "data_size": 63488 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "name": null, 00:15:20.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.294 "is_configured": false, 00:15:20.294 "data_offset": 0, 00:15:20.294 "data_size": 63488 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "name": "BaseBdev3", 00:15:20.294 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:20.294 "is_configured": true, 00:15:20.294 "data_offset": 2048, 00:15:20.294 "data_size": 63488 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "name": "BaseBdev4", 00:15:20.294 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:20.294 "is_configured": true, 00:15:20.294 "data_offset": 2048, 00:15:20.294 "data_size": 63488 00:15:20.294 } 00:15:20.294 ] 00:15:20.294 }' 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.294 11:47:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.294 "name": "raid_bdev1", 00:15:20.294 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:20.294 "strip_size_kb": 0, 00:15:20.294 "state": "online", 00:15:20.294 "raid_level": "raid1", 00:15:20.294 "superblock": true, 00:15:20.294 "num_base_bdevs": 4, 00:15:20.294 "num_base_bdevs_discovered": 3, 00:15:20.294 "num_base_bdevs_operational": 3, 00:15:20.294 "base_bdevs_list": [ 00:15:20.294 { 00:15:20.294 "name": "spare", 00:15:20.294 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:20.294 "is_configured": true, 00:15:20.294 "data_offset": 2048, 00:15:20.294 "data_size": 63488 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "name": null, 00:15:20.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.294 "is_configured": false, 00:15:20.294 "data_offset": 0, 00:15:20.294 "data_size": 63488 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "name": "BaseBdev3", 00:15:20.294 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:20.294 "is_configured": true, 00:15:20.294 "data_offset": 2048, 00:15:20.294 "data_size": 63488 00:15:20.294 }, 00:15:20.294 { 00:15:20.294 "name": "BaseBdev4", 00:15:20.294 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:20.294 "is_configured": true, 00:15:20.294 "data_offset": 2048, 
00:15:20.294 "data_size": 63488 00:15:20.294 } 00:15:20.294 ] 00:15:20.294 }' 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.294 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.552 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.552 "name": "raid_bdev1", 00:15:20.552 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:20.552 "strip_size_kb": 0, 00:15:20.552 "state": "online", 00:15:20.552 "raid_level": "raid1", 00:15:20.552 "superblock": true, 00:15:20.552 "num_base_bdevs": 4, 00:15:20.552 "num_base_bdevs_discovered": 3, 00:15:20.552 "num_base_bdevs_operational": 3, 00:15:20.552 "base_bdevs_list": [ 00:15:20.552 { 00:15:20.552 "name": "spare", 00:15:20.552 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:20.552 "is_configured": true, 00:15:20.552 "data_offset": 2048, 00:15:20.552 "data_size": 63488 00:15:20.552 }, 00:15:20.552 { 00:15:20.552 "name": null, 00:15:20.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.552 "is_configured": false, 00:15:20.552 "data_offset": 0, 00:15:20.552 "data_size": 63488 00:15:20.552 }, 00:15:20.552 { 00:15:20.552 "name": "BaseBdev3", 00:15:20.553 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:20.553 "is_configured": true, 00:15:20.553 "data_offset": 2048, 00:15:20.553 "data_size": 63488 00:15:20.553 }, 00:15:20.553 { 00:15:20.553 "name": "BaseBdev4", 00:15:20.553 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:20.553 "is_configured": true, 00:15:20.553 "data_offset": 2048, 00:15:20.553 "data_size": 63488 00:15:20.553 } 00:15:20.553 ] 00:15:20.553 }' 00:15:20.553 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.553 11:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.852 88.88 IOPS, 266.62 MiB/s [2024-11-04T11:47:46.374Z] 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.852 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.852 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.852 [2024-11-04 11:47:46.232682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.852 [2024-11-04 11:47:46.232765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.852 00:15:20.852 Latency(us) 00:15:20.852 [2024-11-04T11:47:46.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.852 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:20.852 raid_bdev1 : 8.14 87.79 263.38 0.00 0.00 15384.80 341.63 119968.08 00:15:20.852 [2024-11-04T11:47:46.374Z] =================================================================================================================== 00:15:20.852 [2024-11-04T11:47:46.374Z] Total : 87.79 263.38 0.00 0.00 15384.80 341.63 119968.08 00:15:20.852 [2024-11-04 11:47:46.347844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.852 [2024-11-04 11:47:46.347971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.852 [2024-11-04 11:47:46.348121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.852 [2024-11-04 11:47:46.348135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:20.852 { 00:15:20.852 "results": [ 00:15:20.852 { 00:15:20.852 "job": "raid_bdev1", 00:15:20.852 "core_mask": "0x1", 00:15:20.852 "workload": "randrw", 00:15:20.852 "percentage": 50, 00:15:20.852 "status": "finished", 00:15:20.852 "queue_depth": 2, 00:15:20.852 "io_size": 3145728, 00:15:20.852 "runtime": 8.144204, 00:15:20.852 "iops": 87.79249635691836, 00:15:20.852 
"mibps": 263.3774890707551, 00:15:20.852 "io_failed": 0, 00:15:20.852 "io_timeout": 0, 00:15:20.852 "avg_latency_us": 15384.802144929308, 00:15:20.852 "min_latency_us": 341.63144104803496, 00:15:20.852 "max_latency_us": 119968.08384279476 00:15:20.852 } 00:15:20.852 ], 00:15:20.852 "core_count": 1 00:15:20.852 } 00:15:20.852 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.852 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.852 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.853 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.111 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.112 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:21.371 /dev/nbd0 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.371 1+0 records in 00:15:21.371 1+0 records out 00:15:21.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320832 s, 12.8 MB/s 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:21.371 
11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.371 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:21.630 /dev/nbd1 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.630 1+0 records in 00:15:21.630 1+0 records out 00:15:21.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351961 s, 11.6 MB/s 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@888 -- # size=4096 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.630 11:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:21.630 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:21.630 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.630 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:21.630 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.630 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:21.630 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.630 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.198 
11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:22.198 /dev/nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.198 1+0 records in 00:15:22.198 1+0 records out 00:15:22.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317256 s, 12.9 MB/s 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.198 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:22.456 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:22.456 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.456 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:22.456 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.456 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:22.456 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.456 11:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:22.715 11:47:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.715 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.975 11:47:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.975 [2024-11-04 11:47:48.309572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:22.975 [2024-11-04 11:47:48.309631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.975 [2024-11-04 11:47:48.309651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:22.975 [2024-11-04 11:47:48.309668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.975 [2024-11-04 11:47:48.311943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.975 [2024-11-04 11:47:48.311982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:22.975 [2024-11-04 11:47:48.312080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:22.975 [2024-11-04 11:47:48.312171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.975 [2024-11-04 11:47:48.312365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.975 [2024-11-04 11:47:48.312489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:22.975 spare 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.975 [2024-11-04 11:47:48.412421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:22.975 [2024-11-04 11:47:48.412471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:22.975 [2024-11-04 11:47:48.412830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:22.975 [2024-11-04 11:47:48.413064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:22.975 [2024-11-04 11:47:48.413093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:22.975 [2024-11-04 11:47:48.413286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.975 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.976 "name": "raid_bdev1", 00:15:22.976 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:22.976 "strip_size_kb": 0, 00:15:22.976 "state": "online", 00:15:22.976 "raid_level": "raid1", 00:15:22.976 "superblock": true, 00:15:22.976 "num_base_bdevs": 4, 00:15:22.976 "num_base_bdevs_discovered": 3, 00:15:22.976 "num_base_bdevs_operational": 3, 00:15:22.976 "base_bdevs_list": [ 00:15:22.976 { 00:15:22.976 "name": "spare", 00:15:22.976 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:22.976 "is_configured": true, 00:15:22.976 "data_offset": 2048, 00:15:22.976 "data_size": 63488 00:15:22.976 }, 00:15:22.976 { 00:15:22.976 "name": null, 00:15:22.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.976 "is_configured": false, 00:15:22.976 "data_offset": 2048, 00:15:22.976 "data_size": 63488 00:15:22.976 }, 00:15:22.976 { 00:15:22.976 "name": "BaseBdev3", 00:15:22.976 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:22.976 "is_configured": true, 00:15:22.976 "data_offset": 2048, 00:15:22.976 "data_size": 63488 00:15:22.976 }, 00:15:22.976 { 00:15:22.976 "name": "BaseBdev4", 00:15:22.976 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:22.976 
"is_configured": true, 00:15:22.976 "data_offset": 2048, 00:15:22.976 "data_size": 63488 00:15:22.976 } 00:15:22.976 ] 00:15:22.976 }' 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.976 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.544 "name": "raid_bdev1", 00:15:23.544 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:23.544 "strip_size_kb": 0, 00:15:23.544 "state": "online", 00:15:23.544 "raid_level": "raid1", 00:15:23.544 "superblock": true, 00:15:23.544 "num_base_bdevs": 4, 00:15:23.544 "num_base_bdevs_discovered": 3, 00:15:23.544 "num_base_bdevs_operational": 3, 00:15:23.544 "base_bdevs_list": [ 00:15:23.544 { 00:15:23.544 "name": 
"spare", 00:15:23.544 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:23.544 "is_configured": true, 00:15:23.544 "data_offset": 2048, 00:15:23.544 "data_size": 63488 00:15:23.544 }, 00:15:23.544 { 00:15:23.544 "name": null, 00:15:23.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.544 "is_configured": false, 00:15:23.544 "data_offset": 2048, 00:15:23.544 "data_size": 63488 00:15:23.544 }, 00:15:23.544 { 00:15:23.544 "name": "BaseBdev3", 00:15:23.544 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:23.544 "is_configured": true, 00:15:23.544 "data_offset": 2048, 00:15:23.544 "data_size": 63488 00:15:23.544 }, 00:15:23.544 { 00:15:23.544 "name": "BaseBdev4", 00:15:23.544 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:23.544 "is_configured": true, 00:15:23.544 "data_offset": 2048, 00:15:23.544 "data_size": 63488 00:15:23.544 } 00:15:23.544 ] 00:15:23.544 }' 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.544 11:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.544 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.544 [2024-11-04 11:47:49.060678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.802 11:47:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.802 "name": "raid_bdev1", 00:15:23.802 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:23.802 "strip_size_kb": 0, 00:15:23.802 "state": "online", 00:15:23.802 "raid_level": "raid1", 00:15:23.802 "superblock": true, 00:15:23.802 "num_base_bdevs": 4, 00:15:23.802 "num_base_bdevs_discovered": 2, 00:15:23.802 "num_base_bdevs_operational": 2, 00:15:23.802 "base_bdevs_list": [ 00:15:23.802 { 00:15:23.802 "name": null, 00:15:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.802 "is_configured": false, 00:15:23.802 "data_offset": 0, 00:15:23.802 "data_size": 63488 00:15:23.802 }, 00:15:23.802 { 00:15:23.802 "name": null, 00:15:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.802 "is_configured": false, 00:15:23.802 "data_offset": 2048, 00:15:23.802 "data_size": 63488 00:15:23.802 }, 00:15:23.802 { 00:15:23.802 "name": "BaseBdev3", 00:15:23.802 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:23.802 "is_configured": true, 00:15:23.802 "data_offset": 2048, 00:15:23.802 "data_size": 63488 00:15:23.802 }, 00:15:23.802 { 00:15:23.802 "name": "BaseBdev4", 00:15:23.802 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:23.802 "is_configured": true, 00:15:23.802 "data_offset": 2048, 00:15:23.802 "data_size": 63488 00:15:23.802 } 00:15:23.802 ] 00:15:23.802 }' 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.802 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.061 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- 
# rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.061 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.061 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.061 [2024-11-04 11:47:49.476047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.061 [2024-11-04 11:47:49.476273] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:24.061 [2024-11-04 11:47:49.476296] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:24.061 [2024-11-04 11:47:49.476334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.061 [2024-11-04 11:47:49.491355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:24.061 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.061 11:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:24.061 [2024-11-04 11:47:49.493297] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.008 
11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.008 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.266 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.266 "name": "raid_bdev1", 00:15:25.266 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:25.266 "strip_size_kb": 0, 00:15:25.266 "state": "online", 00:15:25.266 "raid_level": "raid1", 00:15:25.266 "superblock": true, 00:15:25.266 "num_base_bdevs": 4, 00:15:25.266 "num_base_bdevs_discovered": 3, 00:15:25.266 "num_base_bdevs_operational": 3, 00:15:25.266 "process": { 00:15:25.266 "type": "rebuild", 00:15:25.266 "target": "spare", 00:15:25.266 "progress": { 00:15:25.266 "blocks": 20480, 00:15:25.266 "percent": 32 00:15:25.266 } 00:15:25.267 }, 00:15:25.267 "base_bdevs_list": [ 00:15:25.267 { 00:15:25.267 "name": "spare", 00:15:25.267 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:25.267 "is_configured": true, 00:15:25.267 "data_offset": 2048, 00:15:25.267 "data_size": 63488 00:15:25.267 }, 00:15:25.267 { 00:15:25.267 "name": null, 00:15:25.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.267 "is_configured": false, 00:15:25.267 "data_offset": 2048, 00:15:25.267 "data_size": 63488 00:15:25.267 }, 00:15:25.267 { 00:15:25.267 "name": "BaseBdev3", 00:15:25.267 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:25.267 "is_configured": true, 00:15:25.267 "data_offset": 2048, 00:15:25.267 "data_size": 63488 00:15:25.267 }, 00:15:25.267 { 00:15:25.267 "name": "BaseBdev4", 00:15:25.267 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:25.267 "is_configured": true, 00:15:25.267 "data_offset": 2048, 00:15:25.267 
"data_size": 63488 00:15:25.267 } 00:15:25.267 ] 00:15:25.267 }' 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.267 [2024-11-04 11:47:50.644887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.267 [2024-11-04 11:47:50.699188] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.267 [2024-11-04 11:47:50.699268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.267 [2024-11-04 11:47:50.699306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.267 [2024-11-04 11:47:50.699314] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.267 11:47:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.267 "name": "raid_bdev1", 00:15:25.267 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:25.267 "strip_size_kb": 0, 00:15:25.267 "state": "online", 00:15:25.267 "raid_level": "raid1", 00:15:25.267 "superblock": true, 00:15:25.267 "num_base_bdevs": 4, 00:15:25.267 "num_base_bdevs_discovered": 2, 00:15:25.267 "num_base_bdevs_operational": 2, 00:15:25.267 "base_bdevs_list": [ 00:15:25.267 { 00:15:25.267 "name": null, 00:15:25.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.267 "is_configured": false, 00:15:25.267 "data_offset": 0, 00:15:25.267 "data_size": 
63488 00:15:25.267 }, 00:15:25.267 { 00:15:25.267 "name": null, 00:15:25.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.267 "is_configured": false, 00:15:25.267 "data_offset": 2048, 00:15:25.267 "data_size": 63488 00:15:25.267 }, 00:15:25.267 { 00:15:25.267 "name": "BaseBdev3", 00:15:25.267 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:25.267 "is_configured": true, 00:15:25.267 "data_offset": 2048, 00:15:25.267 "data_size": 63488 00:15:25.267 }, 00:15:25.267 { 00:15:25.267 "name": "BaseBdev4", 00:15:25.267 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:25.267 "is_configured": true, 00:15:25.267 "data_offset": 2048, 00:15:25.267 "data_size": 63488 00:15:25.267 } 00:15:25.267 ] 00:15:25.267 }' 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.267 11:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.834 11:47:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:25.834 11:47:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.834 11:47:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.834 [2024-11-04 11:47:51.186019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:25.834 [2024-11-04 11:47:51.186100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.834 [2024-11-04 11:47:51.186129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:25.834 [2024-11-04 11:47:51.186139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.834 [2024-11-04 11:47:51.186697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.834 [2024-11-04 11:47:51.186729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:15:25.834 [2024-11-04 11:47:51.186851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:25.834 [2024-11-04 11:47:51.186873] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:25.834 [2024-11-04 11:47:51.186890] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:25.834 [2024-11-04 11:47:51.186914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:25.834 [2024-11-04 11:47:51.203178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:25.834 spare 00:15:25.834 11:47:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.834 [2024-11-04 11:47:51.205230] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.834 11:47:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.797 "name": "raid_bdev1", 00:15:26.797 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:26.797 "strip_size_kb": 0, 00:15:26.797 "state": "online", 00:15:26.797 "raid_level": "raid1", 00:15:26.797 "superblock": true, 00:15:26.797 "num_base_bdevs": 4, 00:15:26.797 "num_base_bdevs_discovered": 3, 00:15:26.797 "num_base_bdevs_operational": 3, 00:15:26.797 "process": { 00:15:26.797 "type": "rebuild", 00:15:26.797 "target": "spare", 00:15:26.797 "progress": { 00:15:26.797 "blocks": 20480, 00:15:26.797 "percent": 32 00:15:26.797 } 00:15:26.797 }, 00:15:26.797 "base_bdevs_list": [ 00:15:26.797 { 00:15:26.797 "name": "spare", 00:15:26.797 "uuid": "a6cc8efd-94c3-5822-9d39-1cd3bfbcf473", 00:15:26.797 "is_configured": true, 00:15:26.797 "data_offset": 2048, 00:15:26.797 "data_size": 63488 00:15:26.797 }, 00:15:26.797 { 00:15:26.797 "name": null, 00:15:26.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.797 "is_configured": false, 00:15:26.797 "data_offset": 2048, 00:15:26.797 "data_size": 63488 00:15:26.797 }, 00:15:26.797 { 00:15:26.797 "name": "BaseBdev3", 00:15:26.797 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:26.797 "is_configured": true, 00:15:26.797 "data_offset": 2048, 00:15:26.797 "data_size": 63488 00:15:26.797 }, 00:15:26.797 { 00:15:26.797 "name": "BaseBdev4", 00:15:26.797 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:26.797 "is_configured": true, 00:15:26.797 "data_offset": 2048, 00:15:26.797 "data_size": 63488 00:15:26.797 } 00:15:26.797 ] 00:15:26.797 }' 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.797 11:47:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.797 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.056 [2024-11-04 11:47:52.372872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.056 [2024-11-04 11:47:52.411142] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.056 [2024-11-04 11:47:52.411299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.056 [2024-11-04 11:47:52.411323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.056 [2024-11-04 11:47:52.411334] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.056 11:47:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.056 "name": "raid_bdev1", 00:15:27.056 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:27.056 "strip_size_kb": 0, 00:15:27.056 "state": "online", 00:15:27.056 "raid_level": "raid1", 00:15:27.056 "superblock": true, 00:15:27.056 "num_base_bdevs": 4, 00:15:27.056 "num_base_bdevs_discovered": 2, 00:15:27.056 "num_base_bdevs_operational": 2, 00:15:27.056 "base_bdevs_list": [ 00:15:27.056 { 00:15:27.056 "name": null, 00:15:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.056 "is_configured": false, 00:15:27.056 "data_offset": 0, 00:15:27.056 "data_size": 63488 00:15:27.056 }, 00:15:27.056 { 00:15:27.056 "name": null, 00:15:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.056 "is_configured": false, 00:15:27.056 "data_offset": 2048, 00:15:27.056 
"data_size": 63488 00:15:27.056 }, 00:15:27.056 { 00:15:27.056 "name": "BaseBdev3", 00:15:27.056 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:27.056 "is_configured": true, 00:15:27.056 "data_offset": 2048, 00:15:27.056 "data_size": 63488 00:15:27.056 }, 00:15:27.056 { 00:15:27.056 "name": "BaseBdev4", 00:15:27.056 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:27.056 "is_configured": true, 00:15:27.056 "data_offset": 2048, 00:15:27.056 "data_size": 63488 00:15:27.056 } 00:15:27.056 ] 00:15:27.056 }' 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.056 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.625 "name": "raid_bdev1", 
00:15:27.625 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:27.625 "strip_size_kb": 0, 00:15:27.625 "state": "online", 00:15:27.625 "raid_level": "raid1", 00:15:27.625 "superblock": true, 00:15:27.625 "num_base_bdevs": 4, 00:15:27.625 "num_base_bdevs_discovered": 2, 00:15:27.625 "num_base_bdevs_operational": 2, 00:15:27.625 "base_bdevs_list": [ 00:15:27.625 { 00:15:27.625 "name": null, 00:15:27.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.625 "is_configured": false, 00:15:27.625 "data_offset": 0, 00:15:27.625 "data_size": 63488 00:15:27.625 }, 00:15:27.625 { 00:15:27.625 "name": null, 00:15:27.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.625 "is_configured": false, 00:15:27.625 "data_offset": 2048, 00:15:27.625 "data_size": 63488 00:15:27.625 }, 00:15:27.625 { 00:15:27.625 "name": "BaseBdev3", 00:15:27.625 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:27.625 "is_configured": true, 00:15:27.625 "data_offset": 2048, 00:15:27.625 "data_size": 63488 00:15:27.625 }, 00:15:27.625 { 00:15:27.625 "name": "BaseBdev4", 00:15:27.625 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:27.625 "is_configured": true, 00:15:27.625 "data_offset": 2048, 00:15:27.625 "data_size": 63488 00:15:27.625 } 00:15:27.625 ] 00:15:27.625 }' 00:15:27.625 11:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.625 11:47:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:27.625 [2024-11-04 11:47:53.085082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:27.625 [2024-11-04 11:47:53.085164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.625 [2024-11-04 11:47:53.085190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:27.625 [2024-11-04 11:47:53.085202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.625 [2024-11-04 11:47:53.085707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.625 [2024-11-04 11:47:53.085736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:27.625 [2024-11-04 11:47:53.085828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:27.625 [2024-11-04 11:47:53.085849] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:27.625 [2024-11-04 11:47:53.085858] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:27.625 [2024-11-04 11:47:53.085870] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:27.625 BaseBdev1 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:27.625 11:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.002 "name": "raid_bdev1", 00:15:29.002 "uuid": 
"fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:29.002 "strip_size_kb": 0, 00:15:29.002 "state": "online", 00:15:29.002 "raid_level": "raid1", 00:15:29.002 "superblock": true, 00:15:29.002 "num_base_bdevs": 4, 00:15:29.002 "num_base_bdevs_discovered": 2, 00:15:29.002 "num_base_bdevs_operational": 2, 00:15:29.002 "base_bdevs_list": [ 00:15:29.002 { 00:15:29.002 "name": null, 00:15:29.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.002 "is_configured": false, 00:15:29.002 "data_offset": 0, 00:15:29.002 "data_size": 63488 00:15:29.002 }, 00:15:29.002 { 00:15:29.002 "name": null, 00:15:29.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.002 "is_configured": false, 00:15:29.002 "data_offset": 2048, 00:15:29.002 "data_size": 63488 00:15:29.002 }, 00:15:29.002 { 00:15:29.002 "name": "BaseBdev3", 00:15:29.002 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:29.002 "is_configured": true, 00:15:29.002 "data_offset": 2048, 00:15:29.002 "data_size": 63488 00:15:29.002 }, 00:15:29.002 { 00:15:29.002 "name": "BaseBdev4", 00:15:29.002 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:29.002 "is_configured": true, 00:15:29.002 "data_offset": 2048, 00:15:29.002 "data_size": 63488 00:15:29.002 } 00:15:29.002 ] 00:15:29.002 }' 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.002 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.260 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.260 "name": "raid_bdev1", 00:15:29.260 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:29.260 "strip_size_kb": 0, 00:15:29.260 "state": "online", 00:15:29.260 "raid_level": "raid1", 00:15:29.260 "superblock": true, 00:15:29.260 "num_base_bdevs": 4, 00:15:29.260 "num_base_bdevs_discovered": 2, 00:15:29.260 "num_base_bdevs_operational": 2, 00:15:29.260 "base_bdevs_list": [ 00:15:29.260 { 00:15:29.260 "name": null, 00:15:29.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.260 "is_configured": false, 00:15:29.260 "data_offset": 0, 00:15:29.260 "data_size": 63488 00:15:29.260 }, 00:15:29.260 { 00:15:29.260 "name": null, 00:15:29.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.260 "is_configured": false, 00:15:29.260 "data_offset": 2048, 00:15:29.260 "data_size": 63488 00:15:29.260 }, 00:15:29.260 { 00:15:29.260 "name": "BaseBdev3", 00:15:29.260 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:29.260 "is_configured": true, 00:15:29.261 "data_offset": 2048, 00:15:29.261 "data_size": 63488 00:15:29.261 }, 00:15:29.261 { 00:15:29.261 "name": "BaseBdev4", 00:15:29.261 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:29.261 "is_configured": true, 00:15:29.261 "data_offset": 2048, 00:15:29.261 "data_size": 63488 00:15:29.261 
} 00:15:29.261 ] 00:15:29.261 }' 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.261 [2024-11-04 11:47:54.702714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.261 [2024-11-04 11:47:54.702896] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:15:29.261 [2024-11-04 11:47:54.702910] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:29.261 request: 00:15:29.261 { 00:15:29.261 "base_bdev": "BaseBdev1", 00:15:29.261 "raid_bdev": "raid_bdev1", 00:15:29.261 "method": "bdev_raid_add_base_bdev", 00:15:29.261 "req_id": 1 00:15:29.261 } 00:15:29.261 Got JSON-RPC error response 00:15:29.261 response: 00:15:29.261 { 00:15:29.261 "code": -22, 00:15:29.261 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:29.261 } 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.261 11:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:30.195 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:30.195 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.195 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.454 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.454 "name": "raid_bdev1", 00:15:30.454 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:30.454 "strip_size_kb": 0, 00:15:30.454 "state": "online", 00:15:30.454 "raid_level": "raid1", 00:15:30.454 "superblock": true, 00:15:30.454 "num_base_bdevs": 4, 00:15:30.454 "num_base_bdevs_discovered": 2, 00:15:30.454 "num_base_bdevs_operational": 2, 00:15:30.454 "base_bdevs_list": [ 00:15:30.454 { 00:15:30.454 "name": null, 00:15:30.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.454 "is_configured": false, 00:15:30.454 "data_offset": 0, 00:15:30.454 "data_size": 63488 00:15:30.454 }, 00:15:30.454 { 00:15:30.454 "name": null, 00:15:30.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.454 "is_configured": false, 00:15:30.454 "data_offset": 2048, 00:15:30.454 "data_size": 63488 00:15:30.454 }, 00:15:30.454 { 00:15:30.454 "name": "BaseBdev3", 00:15:30.454 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:30.454 "is_configured": true, 00:15:30.454 
"data_offset": 2048, 00:15:30.454 "data_size": 63488 00:15:30.454 }, 00:15:30.454 { 00:15:30.454 "name": "BaseBdev4", 00:15:30.454 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:30.454 "is_configured": true, 00:15:30.454 "data_offset": 2048, 00:15:30.454 "data_size": 63488 00:15:30.454 } 00:15:30.455 ] 00:15:30.455 }' 00:15:30.455 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.455 11:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.713 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.714 "name": "raid_bdev1", 00:15:30.714 "uuid": "fbf0f4e2-7507-40ce-be80-7abcccff8bb6", 00:15:30.714 "strip_size_kb": 0, 00:15:30.714 "state": "online", 00:15:30.714 "raid_level": "raid1", 00:15:30.714 "superblock": true, 
00:15:30.714 "num_base_bdevs": 4, 00:15:30.714 "num_base_bdevs_discovered": 2, 00:15:30.714 "num_base_bdevs_operational": 2, 00:15:30.714 "base_bdevs_list": [ 00:15:30.714 { 00:15:30.714 "name": null, 00:15:30.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.714 "is_configured": false, 00:15:30.714 "data_offset": 0, 00:15:30.714 "data_size": 63488 00:15:30.714 }, 00:15:30.714 { 00:15:30.714 "name": null, 00:15:30.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.714 "is_configured": false, 00:15:30.714 "data_offset": 2048, 00:15:30.714 "data_size": 63488 00:15:30.714 }, 00:15:30.714 { 00:15:30.714 "name": "BaseBdev3", 00:15:30.714 "uuid": "a758f4ae-7660-5078-9394-1dcf3a479223", 00:15:30.714 "is_configured": true, 00:15:30.714 "data_offset": 2048, 00:15:30.714 "data_size": 63488 00:15:30.714 }, 00:15:30.714 { 00:15:30.714 "name": "BaseBdev4", 00:15:30.714 "uuid": "11d55fab-8011-5ec3-ac64-8736cdfc33fb", 00:15:30.714 "is_configured": true, 00:15:30.714 "data_offset": 2048, 00:15:30.714 "data_size": 63488 00:15:30.714 } 00:15:30.714 ] 00:15:30.714 }' 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.714 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.973 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.973 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79401 00:15:30.973 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79401 ']' 00:15:30.973 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79401 00:15:30.973 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:15:30.973 11:47:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.974 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79401 00:15:30.974 killing process with pid 79401 00:15:30.974 Received shutdown signal, test time was about 18.140278 seconds 00:15:30.974 00:15:30.974 Latency(us) 00:15:30.974 [2024-11-04T11:47:56.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.974 [2024-11-04T11:47:56.496Z] =================================================================================================================== 00:15:30.974 [2024-11-04T11:47:56.496Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.974 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:30.974 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:30.974 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79401' 00:15:30.974 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79401 00:15:30.974 [2024-11-04 11:47:56.301306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.974 [2024-11-04 11:47:56.301448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.974 11:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79401 00:15:30.974 [2024-11-04 11:47:56.301522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.974 [2024-11-04 11:47:56.301531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:31.232 [2024-11-04 11:47:56.743910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.632 11:47:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:15:32.632 00:15:32.632 real 0m21.681s 00:15:32.633 user 0m28.408s 00:15:32.633 sys 0m2.605s 00:15:32.633 11:47:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:32.633 11:47:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.633 ************************************ 00:15:32.633 END TEST raid_rebuild_test_sb_io 00:15:32.633 ************************************ 00:15:32.633 11:47:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:32.633 11:47:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:32.633 11:47:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:32.633 11:47:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:32.633 11:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.633 ************************************ 00:15:32.633 START TEST raid5f_state_function_test 00:15:32.633 ************************************ 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80129 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80129' 00:15:32.633 Process raid pid: 80129 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80129 00:15:32.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80129 ']' 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:32.633 11:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.633 [2024-11-04 11:47:58.142697] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:15:32.633 [2024-11-04 11:47:58.142835] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.893 [2024-11-04 11:47:58.324019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.152 [2024-11-04 11:47:58.448750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.411 [2024-11-04 11:47:58.676669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.411 [2024-11-04 11:47:58.676711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.669 [2024-11-04 11:47:59.010052] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.669 [2024-11-04 11:47:59.010178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.669 [2024-11-04 11:47:59.010233] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.669 [2024-11-04 11:47:59.010263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.669 [2024-11-04 11:47:59.010307] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:33.669 [2024-11-04 11:47:59.010333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.669 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:33.670 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.670 "name": "Existed_Raid", 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.670 "strip_size_kb": 64, 00:15:33.670 "state": "configuring", 00:15:33.670 "raid_level": "raid5f", 00:15:33.670 "superblock": false, 00:15:33.670 "num_base_bdevs": 3, 00:15:33.670 "num_base_bdevs_discovered": 0, 00:15:33.670 "num_base_bdevs_operational": 3, 00:15:33.670 "base_bdevs_list": [ 00:15:33.670 { 00:15:33.670 "name": "BaseBdev1", 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.670 "is_configured": false, 00:15:33.670 "data_offset": 0, 00:15:33.670 "data_size": 0 00:15:33.670 }, 00:15:33.670 { 00:15:33.670 "name": "BaseBdev2", 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.670 "is_configured": false, 00:15:33.670 "data_offset": 0, 00:15:33.670 "data_size": 0 00:15:33.670 }, 00:15:33.670 { 00:15:33.670 "name": "BaseBdev3", 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.670 "is_configured": false, 00:15:33.670 "data_offset": 0, 00:15:33.670 "data_size": 0 00:15:33.670 } 00:15:33.670 ] 00:15:33.670 }' 00:15:33.670 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.670 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.243 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.243 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.243 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.243 [2024-11-04 11:47:59.469238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.243 [2024-11-04 11:47:59.469323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:34.243 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.243 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.243 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.243 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.244 [2024-11-04 11:47:59.477223] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.244 [2024-11-04 11:47:59.477318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.244 [2024-11-04 11:47:59.477367] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.244 [2024-11-04 11:47:59.477426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.244 [2024-11-04 11:47:59.477466] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.244 [2024-11-04 11:47:59.477508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.244 [2024-11-04 11:47:59.522272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.244 BaseBdev1 00:15:34.244 11:47:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.244 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.245 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.245 [ 00:15:34.245 { 00:15:34.245 "name": "BaseBdev1", 00:15:34.245 "aliases": [ 00:15:34.245 "e7ee7448-eca5-4d00-b039-57b1bb8afa0d" 00:15:34.245 ], 00:15:34.245 "product_name": "Malloc disk", 00:15:34.245 "block_size": 512, 00:15:34.245 "num_blocks": 65536, 00:15:34.245 "uuid": "e7ee7448-eca5-4d00-b039-57b1bb8afa0d", 00:15:34.245 "assigned_rate_limits": { 00:15:34.245 "rw_ios_per_sec": 0, 00:15:34.245 
"rw_mbytes_per_sec": 0, 00:15:34.245 "r_mbytes_per_sec": 0, 00:15:34.245 "w_mbytes_per_sec": 0 00:15:34.245 }, 00:15:34.245 "claimed": true, 00:15:34.245 "claim_type": "exclusive_write", 00:15:34.245 "zoned": false, 00:15:34.245 "supported_io_types": { 00:15:34.245 "read": true, 00:15:34.245 "write": true, 00:15:34.245 "unmap": true, 00:15:34.245 "flush": true, 00:15:34.245 "reset": true, 00:15:34.245 "nvme_admin": false, 00:15:34.245 "nvme_io": false, 00:15:34.245 "nvme_io_md": false, 00:15:34.245 "write_zeroes": true, 00:15:34.245 "zcopy": true, 00:15:34.245 "get_zone_info": false, 00:15:34.245 "zone_management": false, 00:15:34.245 "zone_append": false, 00:15:34.245 "compare": false, 00:15:34.245 "compare_and_write": false, 00:15:34.245 "abort": true, 00:15:34.245 "seek_hole": false, 00:15:34.245 "seek_data": false, 00:15:34.245 "copy": true, 00:15:34.245 "nvme_iov_md": false 00:15:34.245 }, 00:15:34.245 "memory_domains": [ 00:15:34.245 { 00:15:34.245 "dma_device_id": "system", 00:15:34.245 "dma_device_type": 1 00:15:34.245 }, 00:15:34.245 { 00:15:34.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.245 "dma_device_type": 2 00:15:34.245 } 00:15:34.245 ], 00:15:34.245 "driver_specific": {} 00:15:34.245 } 00:15:34.245 ] 00:15:34.245 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.245 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:34.245 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.245 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.246 11:47:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.246 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.246 "name": "Existed_Raid", 00:15:34.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.246 "strip_size_kb": 64, 00:15:34.246 "state": "configuring", 00:15:34.246 "raid_level": "raid5f", 00:15:34.246 "superblock": false, 00:15:34.246 "num_base_bdevs": 3, 00:15:34.246 "num_base_bdevs_discovered": 1, 00:15:34.246 "num_base_bdevs_operational": 3, 00:15:34.246 "base_bdevs_list": [ 00:15:34.246 { 00:15:34.246 "name": "BaseBdev1", 00:15:34.246 "uuid": "e7ee7448-eca5-4d00-b039-57b1bb8afa0d", 00:15:34.246 "is_configured": true, 00:15:34.246 "data_offset": 0, 00:15:34.246 "data_size": 65536 00:15:34.246 }, 00:15:34.246 { 00:15:34.246 "name": 
"BaseBdev2", 00:15:34.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.246 "is_configured": false, 00:15:34.247 "data_offset": 0, 00:15:34.247 "data_size": 0 00:15:34.247 }, 00:15:34.247 { 00:15:34.247 "name": "BaseBdev3", 00:15:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.247 "is_configured": false, 00:15:34.247 "data_offset": 0, 00:15:34.247 "data_size": 0 00:15:34.247 } 00:15:34.247 ] 00:15:34.247 }' 00:15:34.247 11:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.247 11:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.509 [2024-11-04 11:48:00.013555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.509 [2024-11-04 11:48:00.013667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.509 [2024-11-04 11:48:00.021604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.509 [2024-11-04 11:48:00.023671] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:34.509 [2024-11-04 11:48:00.023718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.509 [2024-11-04 11:48:00.023729] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.509 [2024-11-04 11:48:00.023739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.509 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.767 11:48:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.767 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.767 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.767 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.767 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.767 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.767 "name": "Existed_Raid", 00:15:34.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.768 "strip_size_kb": 64, 00:15:34.768 "state": "configuring", 00:15:34.768 "raid_level": "raid5f", 00:15:34.768 "superblock": false, 00:15:34.768 "num_base_bdevs": 3, 00:15:34.768 "num_base_bdevs_discovered": 1, 00:15:34.768 "num_base_bdevs_operational": 3, 00:15:34.768 "base_bdevs_list": [ 00:15:34.768 { 00:15:34.768 "name": "BaseBdev1", 00:15:34.768 "uuid": "e7ee7448-eca5-4d00-b039-57b1bb8afa0d", 00:15:34.768 "is_configured": true, 00:15:34.768 "data_offset": 0, 00:15:34.768 "data_size": 65536 00:15:34.768 }, 00:15:34.768 { 00:15:34.768 "name": "BaseBdev2", 00:15:34.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.768 "is_configured": false, 00:15:34.768 "data_offset": 0, 00:15:34.768 "data_size": 0 00:15:34.768 }, 00:15:34.768 { 00:15:34.768 "name": "BaseBdev3", 00:15:34.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.768 "is_configured": false, 00:15:34.768 "data_offset": 0, 00:15:34.768 "data_size": 0 00:15:34.768 } 00:15:34.768 ] 00:15:34.768 }' 00:15:34.768 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.768 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.026 [2024-11-04 11:48:00.531685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.026 BaseBdev2 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.026 11:48:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.286 [ 00:15:35.286 { 00:15:35.286 "name": "BaseBdev2", 00:15:35.286 "aliases": [ 00:15:35.286 "aba4c355-d9fd-4a38-bac9-9dbaf7e43e6f" 00:15:35.286 ], 00:15:35.286 "product_name": "Malloc disk", 00:15:35.286 "block_size": 512, 00:15:35.286 "num_blocks": 65536, 00:15:35.286 "uuid": "aba4c355-d9fd-4a38-bac9-9dbaf7e43e6f", 00:15:35.286 "assigned_rate_limits": { 00:15:35.286 "rw_ios_per_sec": 0, 00:15:35.286 "rw_mbytes_per_sec": 0, 00:15:35.286 "r_mbytes_per_sec": 0, 00:15:35.286 "w_mbytes_per_sec": 0 00:15:35.286 }, 00:15:35.286 "claimed": true, 00:15:35.286 "claim_type": "exclusive_write", 00:15:35.286 "zoned": false, 00:15:35.286 "supported_io_types": { 00:15:35.286 "read": true, 00:15:35.286 "write": true, 00:15:35.286 "unmap": true, 00:15:35.286 "flush": true, 00:15:35.286 "reset": true, 00:15:35.286 "nvme_admin": false, 00:15:35.286 "nvme_io": false, 00:15:35.286 "nvme_io_md": false, 00:15:35.286 "write_zeroes": true, 00:15:35.286 "zcopy": true, 00:15:35.286 "get_zone_info": false, 00:15:35.286 "zone_management": false, 00:15:35.286 "zone_append": false, 00:15:35.286 "compare": false, 00:15:35.286 "compare_and_write": false, 00:15:35.286 "abort": true, 00:15:35.286 "seek_hole": false, 00:15:35.286 "seek_data": false, 00:15:35.286 "copy": true, 00:15:35.286 "nvme_iov_md": false 00:15:35.286 }, 00:15:35.286 "memory_domains": [ 00:15:35.286 { 00:15:35.286 "dma_device_id": "system", 00:15:35.286 "dma_device_type": 1 00:15:35.286 }, 00:15:35.286 { 00:15:35.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.286 "dma_device_type": 2 00:15:35.286 } 00:15:35.286 ], 00:15:35.286 "driver_specific": {} 00:15:35.286 } 00:15:35.286 ] 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:35.286 "name": "Existed_Raid", 00:15:35.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.286 "strip_size_kb": 64, 00:15:35.286 "state": "configuring", 00:15:35.286 "raid_level": "raid5f", 00:15:35.286 "superblock": false, 00:15:35.286 "num_base_bdevs": 3, 00:15:35.286 "num_base_bdevs_discovered": 2, 00:15:35.286 "num_base_bdevs_operational": 3, 00:15:35.286 "base_bdevs_list": [ 00:15:35.286 { 00:15:35.286 "name": "BaseBdev1", 00:15:35.286 "uuid": "e7ee7448-eca5-4d00-b039-57b1bb8afa0d", 00:15:35.286 "is_configured": true, 00:15:35.286 "data_offset": 0, 00:15:35.286 "data_size": 65536 00:15:35.286 }, 00:15:35.286 { 00:15:35.286 "name": "BaseBdev2", 00:15:35.286 "uuid": "aba4c355-d9fd-4a38-bac9-9dbaf7e43e6f", 00:15:35.286 "is_configured": true, 00:15:35.286 "data_offset": 0, 00:15:35.286 "data_size": 65536 00:15:35.286 }, 00:15:35.286 { 00:15:35.286 "name": "BaseBdev3", 00:15:35.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.286 "is_configured": false, 00:15:35.286 "data_offset": 0, 00:15:35.286 "data_size": 0 00:15:35.286 } 00:15:35.286 ] 00:15:35.286 }' 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.286 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.545 11:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.545 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.545 11:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.545 [2024-11-04 11:48:01.035370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.545 [2024-11-04 11:48:01.035554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:35.545 [2024-11-04 11:48:01.035592] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:35.545 [2024-11-04 11:48:01.035967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:35.545 [2024-11-04 11:48:01.042283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:35.545 [2024-11-04 11:48:01.042365] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:35.545 [2024-11-04 11:48:01.042872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.545 BaseBdev3 00:15:35.545 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.545 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:35.545 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:35.545 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:35.545 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:35.545 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:35.545 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.546 [ 00:15:35.546 { 00:15:35.546 "name": "BaseBdev3", 00:15:35.546 "aliases": [ 00:15:35.546 "307e123f-1ad1-4c3e-bce6-d72930d27223" 00:15:35.546 ], 00:15:35.546 "product_name": "Malloc disk", 00:15:35.546 "block_size": 512, 00:15:35.546 "num_blocks": 65536, 00:15:35.546 "uuid": "307e123f-1ad1-4c3e-bce6-d72930d27223", 00:15:35.546 "assigned_rate_limits": { 00:15:35.546 "rw_ios_per_sec": 0, 00:15:35.546 "rw_mbytes_per_sec": 0, 00:15:35.546 "r_mbytes_per_sec": 0, 00:15:35.546 "w_mbytes_per_sec": 0 00:15:35.546 }, 00:15:35.546 "claimed": true, 00:15:35.546 "claim_type": "exclusive_write", 00:15:35.546 "zoned": false, 00:15:35.546 "supported_io_types": { 00:15:35.546 "read": true, 00:15:35.546 "write": true, 00:15:35.546 "unmap": true, 00:15:35.546 "flush": true, 00:15:35.546 "reset": true, 00:15:35.546 "nvme_admin": false, 00:15:35.546 "nvme_io": false, 00:15:35.546 "nvme_io_md": false, 00:15:35.546 "write_zeroes": true, 00:15:35.546 "zcopy": true, 00:15:35.546 "get_zone_info": false, 00:15:35.546 "zone_management": false, 00:15:35.546 "zone_append": false, 00:15:35.546 "compare": false, 00:15:35.546 "compare_and_write": false, 00:15:35.546 "abort": true, 00:15:35.546 "seek_hole": false, 00:15:35.546 "seek_data": false, 00:15:35.546 "copy": true, 00:15:35.546 "nvme_iov_md": false 00:15:35.546 }, 00:15:35.546 "memory_domains": [ 00:15:35.546 { 00:15:35.546 "dma_device_id": "system", 00:15:35.546 "dma_device_type": 1 00:15:35.546 }, 00:15:35.546 { 00:15:35.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.546 "dma_device_type": 2 00:15:35.546 } 00:15:35.546 ], 00:15:35.546 "driver_specific": {} 00:15:35.546 } 00:15:35.546 ] 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.546 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.805 11:48:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.805 "name": "Existed_Raid", 00:15:35.805 "uuid": "fd3b9796-aefe-4444-888d-ba8bcd096251", 00:15:35.805 "strip_size_kb": 64, 00:15:35.805 "state": "online", 00:15:35.805 "raid_level": "raid5f", 00:15:35.805 "superblock": false, 00:15:35.805 "num_base_bdevs": 3, 00:15:35.805 "num_base_bdevs_discovered": 3, 00:15:35.805 "num_base_bdevs_operational": 3, 00:15:35.805 "base_bdevs_list": [ 00:15:35.805 { 00:15:35.805 "name": "BaseBdev1", 00:15:35.805 "uuid": "e7ee7448-eca5-4d00-b039-57b1bb8afa0d", 00:15:35.805 "is_configured": true, 00:15:35.805 "data_offset": 0, 00:15:35.805 "data_size": 65536 00:15:35.805 }, 00:15:35.805 { 00:15:35.805 "name": "BaseBdev2", 00:15:35.805 "uuid": "aba4c355-d9fd-4a38-bac9-9dbaf7e43e6f", 00:15:35.805 "is_configured": true, 00:15:35.805 "data_offset": 0, 00:15:35.805 "data_size": 65536 00:15:35.805 }, 00:15:35.805 { 00:15:35.805 "name": "BaseBdev3", 00:15:35.805 "uuid": "307e123f-1ad1-4c3e-bce6-d72930d27223", 00:15:35.805 "is_configured": true, 00:15:35.805 "data_offset": 0, 00:15:35.805 "data_size": 65536 00:15:35.805 } 00:15:35.805 ] 00:15:35.805 }' 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.805 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.063 11:48:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.063 [2024-11-04 11:48:01.556613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.063 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.322 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.322 "name": "Existed_Raid", 00:15:36.322 "aliases": [ 00:15:36.322 "fd3b9796-aefe-4444-888d-ba8bcd096251" 00:15:36.322 ], 00:15:36.322 "product_name": "Raid Volume", 00:15:36.322 "block_size": 512, 00:15:36.322 "num_blocks": 131072, 00:15:36.322 "uuid": "fd3b9796-aefe-4444-888d-ba8bcd096251", 00:15:36.322 "assigned_rate_limits": { 00:15:36.322 "rw_ios_per_sec": 0, 00:15:36.322 "rw_mbytes_per_sec": 0, 00:15:36.322 "r_mbytes_per_sec": 0, 00:15:36.322 "w_mbytes_per_sec": 0 00:15:36.322 }, 00:15:36.322 "claimed": false, 00:15:36.322 "zoned": false, 00:15:36.322 "supported_io_types": { 00:15:36.322 "read": true, 00:15:36.322 "write": true, 00:15:36.322 "unmap": false, 00:15:36.322 "flush": false, 00:15:36.322 "reset": true, 00:15:36.322 "nvme_admin": false, 00:15:36.322 "nvme_io": false, 00:15:36.323 "nvme_io_md": false, 00:15:36.323 "write_zeroes": true, 00:15:36.323 "zcopy": false, 00:15:36.323 "get_zone_info": false, 00:15:36.323 "zone_management": false, 00:15:36.323 "zone_append": false, 
00:15:36.323 "compare": false, 00:15:36.323 "compare_and_write": false, 00:15:36.323 "abort": false, 00:15:36.323 "seek_hole": false, 00:15:36.323 "seek_data": false, 00:15:36.323 "copy": false, 00:15:36.323 "nvme_iov_md": false 00:15:36.323 }, 00:15:36.323 "driver_specific": { 00:15:36.323 "raid": { 00:15:36.323 "uuid": "fd3b9796-aefe-4444-888d-ba8bcd096251", 00:15:36.323 "strip_size_kb": 64, 00:15:36.323 "state": "online", 00:15:36.323 "raid_level": "raid5f", 00:15:36.323 "superblock": false, 00:15:36.323 "num_base_bdevs": 3, 00:15:36.323 "num_base_bdevs_discovered": 3, 00:15:36.323 "num_base_bdevs_operational": 3, 00:15:36.323 "base_bdevs_list": [ 00:15:36.323 { 00:15:36.323 "name": "BaseBdev1", 00:15:36.323 "uuid": "e7ee7448-eca5-4d00-b039-57b1bb8afa0d", 00:15:36.323 "is_configured": true, 00:15:36.323 "data_offset": 0, 00:15:36.323 "data_size": 65536 00:15:36.323 }, 00:15:36.323 { 00:15:36.323 "name": "BaseBdev2", 00:15:36.323 "uuid": "aba4c355-d9fd-4a38-bac9-9dbaf7e43e6f", 00:15:36.323 "is_configured": true, 00:15:36.323 "data_offset": 0, 00:15:36.323 "data_size": 65536 00:15:36.323 }, 00:15:36.323 { 00:15:36.323 "name": "BaseBdev3", 00:15:36.323 "uuid": "307e123f-1ad1-4c3e-bce6-d72930d27223", 00:15:36.323 "is_configured": true, 00:15:36.323 "data_offset": 0, 00:15:36.323 "data_size": 65536 00:15:36.323 } 00:15:36.323 ] 00:15:36.323 } 00:15:36.323 } 00:15:36.323 }' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.323 BaseBdev2 00:15:36.323 BaseBdev3' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.323 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.323 [2024-11-04 11:48:01.812023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:36.582 
11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.582 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.582 "name": "Existed_Raid", 00:15:36.582 "uuid": "fd3b9796-aefe-4444-888d-ba8bcd096251", 00:15:36.582 "strip_size_kb": 64, 00:15:36.582 "state": 
"online", 00:15:36.582 "raid_level": "raid5f", 00:15:36.582 "superblock": false, 00:15:36.582 "num_base_bdevs": 3, 00:15:36.582 "num_base_bdevs_discovered": 2, 00:15:36.582 "num_base_bdevs_operational": 2, 00:15:36.582 "base_bdevs_list": [ 00:15:36.582 { 00:15:36.582 "name": null, 00:15:36.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.582 "is_configured": false, 00:15:36.582 "data_offset": 0, 00:15:36.582 "data_size": 65536 00:15:36.582 }, 00:15:36.582 { 00:15:36.582 "name": "BaseBdev2", 00:15:36.582 "uuid": "aba4c355-d9fd-4a38-bac9-9dbaf7e43e6f", 00:15:36.583 "is_configured": true, 00:15:36.583 "data_offset": 0, 00:15:36.583 "data_size": 65536 00:15:36.583 }, 00:15:36.583 { 00:15:36.583 "name": "BaseBdev3", 00:15:36.583 "uuid": "307e123f-1ad1-4c3e-bce6-d72930d27223", 00:15:36.583 "is_configured": true, 00:15:36.583 "data_offset": 0, 00:15:36.583 "data_size": 65536 00:15:36.583 } 00:15:36.583 ] 00:15:36.583 }' 00:15:36.583 11:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.583 11:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.904 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.904 [2024-11-04 11:48:02.380637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:36.904 [2024-11-04 11:48:02.380781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.164 [2024-11-04 11:48:02.476567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.164 [2024-11-04 11:48:02.536604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.164 [2024-11-04 11:48:02.536720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.164 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.423 BaseBdev2 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.423 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:37.423 [ 00:15:37.423 { 00:15:37.423 "name": "BaseBdev2", 00:15:37.423 "aliases": [ 00:15:37.423 "cfbe08e0-8672-44e8-93e3-aeb55f923b3e" 00:15:37.423 ], 00:15:37.423 "product_name": "Malloc disk", 00:15:37.423 "block_size": 512, 00:15:37.423 "num_blocks": 65536, 00:15:37.423 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:37.423 "assigned_rate_limits": { 00:15:37.423 "rw_ios_per_sec": 0, 00:15:37.423 "rw_mbytes_per_sec": 0, 00:15:37.423 "r_mbytes_per_sec": 0, 00:15:37.423 "w_mbytes_per_sec": 0 00:15:37.423 }, 00:15:37.423 "claimed": false, 00:15:37.423 "zoned": false, 00:15:37.423 "supported_io_types": { 00:15:37.423 "read": true, 00:15:37.423 "write": true, 00:15:37.423 "unmap": true, 00:15:37.423 "flush": true, 00:15:37.423 "reset": true, 00:15:37.423 "nvme_admin": false, 00:15:37.423 "nvme_io": false, 00:15:37.423 "nvme_io_md": false, 00:15:37.423 "write_zeroes": true, 00:15:37.423 "zcopy": true, 00:15:37.423 "get_zone_info": false, 00:15:37.423 "zone_management": false, 00:15:37.423 "zone_append": false, 00:15:37.423 "compare": false, 00:15:37.423 "compare_and_write": false, 00:15:37.423 "abort": true, 00:15:37.423 "seek_hole": false, 00:15:37.423 "seek_data": false, 00:15:37.423 "copy": true, 00:15:37.423 "nvme_iov_md": false 00:15:37.423 }, 00:15:37.423 "memory_domains": [ 00:15:37.423 { 00:15:37.423 "dma_device_id": "system", 00:15:37.423 "dma_device_type": 1 00:15:37.423 }, 00:15:37.423 { 00:15:37.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.424 "dma_device_type": 2 00:15:37.424 } 00:15:37.424 ], 00:15:37.424 "driver_specific": {} 00:15:37.424 } 00:15:37.424 ] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.424 BaseBdev3 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.424 [ 00:15:37.424 { 00:15:37.424 "name": "BaseBdev3", 00:15:37.424 "aliases": [ 00:15:37.424 "b4a7f657-12b2-4bce-8572-805a8b426034" 00:15:37.424 ], 00:15:37.424 "product_name": "Malloc disk", 00:15:37.424 "block_size": 512, 00:15:37.424 "num_blocks": 65536, 00:15:37.424 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:37.424 "assigned_rate_limits": { 00:15:37.424 "rw_ios_per_sec": 0, 00:15:37.424 "rw_mbytes_per_sec": 0, 00:15:37.424 "r_mbytes_per_sec": 0, 00:15:37.424 "w_mbytes_per_sec": 0 00:15:37.424 }, 00:15:37.424 "claimed": false, 00:15:37.424 "zoned": false, 00:15:37.424 "supported_io_types": { 00:15:37.424 "read": true, 00:15:37.424 "write": true, 00:15:37.424 "unmap": true, 00:15:37.424 "flush": true, 00:15:37.424 "reset": true, 00:15:37.424 "nvme_admin": false, 00:15:37.424 "nvme_io": false, 00:15:37.424 "nvme_io_md": false, 00:15:37.424 "write_zeroes": true, 00:15:37.424 "zcopy": true, 00:15:37.424 "get_zone_info": false, 00:15:37.424 "zone_management": false, 00:15:37.424 "zone_append": false, 00:15:37.424 "compare": false, 00:15:37.424 "compare_and_write": false, 00:15:37.424 "abort": true, 00:15:37.424 "seek_hole": false, 00:15:37.424 "seek_data": false, 00:15:37.424 "copy": true, 00:15:37.424 "nvme_iov_md": false 00:15:37.424 }, 00:15:37.424 "memory_domains": [ 00:15:37.424 { 00:15:37.424 "dma_device_id": "system", 00:15:37.424 "dma_device_type": 1 00:15:37.424 }, 00:15:37.424 { 00:15:37.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.424 "dma_device_type": 2 00:15:37.424 } 00:15:37.424 ], 00:15:37.424 "driver_specific": {} 00:15:37.424 } 00:15:37.424 ] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.424 11:48:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.424 [2024-11-04 11:48:02.866583] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.424 [2024-11-04 11:48:02.866677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.424 [2024-11-04 11:48:02.866721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.424 [2024-11-04 11:48:02.868819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.424 11:48:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.424 "name": "Existed_Raid", 00:15:37.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.424 "strip_size_kb": 64, 00:15:37.424 "state": "configuring", 00:15:37.424 "raid_level": "raid5f", 00:15:37.424 "superblock": false, 00:15:37.424 "num_base_bdevs": 3, 00:15:37.424 "num_base_bdevs_discovered": 2, 00:15:37.424 "num_base_bdevs_operational": 3, 00:15:37.424 "base_bdevs_list": [ 00:15:37.424 { 00:15:37.424 "name": "BaseBdev1", 00:15:37.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.424 "is_configured": false, 00:15:37.424 "data_offset": 0, 00:15:37.424 "data_size": 0 00:15:37.424 }, 00:15:37.424 { 00:15:37.424 "name": "BaseBdev2", 00:15:37.424 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:37.424 "is_configured": true, 00:15:37.424 "data_offset": 0, 00:15:37.424 "data_size": 65536 00:15:37.424 }, 00:15:37.424 { 00:15:37.424 "name": "BaseBdev3", 00:15:37.424 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:37.424 "is_configured": true, 
00:15:37.424 "data_offset": 0, 00:15:37.424 "data_size": 65536 00:15:37.424 } 00:15:37.424 ] 00:15:37.424 }' 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.424 11:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.991 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:37.991 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.991 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.991 [2024-11-04 11:48:03.318345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.991 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.992 11:48:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.992 "name": "Existed_Raid", 00:15:37.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.992 "strip_size_kb": 64, 00:15:37.992 "state": "configuring", 00:15:37.992 "raid_level": "raid5f", 00:15:37.992 "superblock": false, 00:15:37.992 "num_base_bdevs": 3, 00:15:37.992 "num_base_bdevs_discovered": 1, 00:15:37.992 "num_base_bdevs_operational": 3, 00:15:37.992 "base_bdevs_list": [ 00:15:37.992 { 00:15:37.992 "name": "BaseBdev1", 00:15:37.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.992 "is_configured": false, 00:15:37.992 "data_offset": 0, 00:15:37.992 "data_size": 0 00:15:37.992 }, 00:15:37.992 { 00:15:37.992 "name": null, 00:15:37.992 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:37.992 "is_configured": false, 00:15:37.992 "data_offset": 0, 00:15:37.992 "data_size": 65536 00:15:37.992 }, 00:15:37.992 { 00:15:37.992 "name": "BaseBdev3", 00:15:37.992 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:37.992 "is_configured": true, 00:15:37.992 "data_offset": 0, 00:15:37.992 "data_size": 65536 00:15:37.992 } 00:15:37.992 ] 00:15:37.992 }' 00:15:37.992 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.992 11:48:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.250 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.250 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.250 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.250 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 [2024-11-04 11:48:03.840043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.509 BaseBdev1 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:38.509 11:48:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.509 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 [ 00:15:38.509 { 00:15:38.509 "name": "BaseBdev1", 00:15:38.509 "aliases": [ 00:15:38.509 "22e3c021-740d-4d96-861b-110ef04e884f" 00:15:38.509 ], 00:15:38.509 "product_name": "Malloc disk", 00:15:38.509 "block_size": 512, 00:15:38.509 "num_blocks": 65536, 00:15:38.509 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:38.509 "assigned_rate_limits": { 00:15:38.509 "rw_ios_per_sec": 0, 00:15:38.509 "rw_mbytes_per_sec": 0, 00:15:38.509 "r_mbytes_per_sec": 0, 00:15:38.509 "w_mbytes_per_sec": 0 00:15:38.509 }, 00:15:38.509 "claimed": true, 00:15:38.509 "claim_type": "exclusive_write", 00:15:38.509 "zoned": false, 00:15:38.509 "supported_io_types": { 00:15:38.509 "read": true, 00:15:38.509 "write": true, 00:15:38.509 "unmap": true, 00:15:38.509 "flush": true, 00:15:38.509 "reset": true, 00:15:38.509 "nvme_admin": false, 00:15:38.509 "nvme_io": false, 00:15:38.509 "nvme_io_md": false, 00:15:38.509 "write_zeroes": true, 00:15:38.509 "zcopy": true, 00:15:38.509 "get_zone_info": false, 00:15:38.509 "zone_management": false, 00:15:38.509 "zone_append": false, 00:15:38.509 
"compare": false, 00:15:38.509 "compare_and_write": false, 00:15:38.509 "abort": true, 00:15:38.510 "seek_hole": false, 00:15:38.510 "seek_data": false, 00:15:38.510 "copy": true, 00:15:38.510 "nvme_iov_md": false 00:15:38.510 }, 00:15:38.510 "memory_domains": [ 00:15:38.510 { 00:15:38.510 "dma_device_id": "system", 00:15:38.510 "dma_device_type": 1 00:15:38.510 }, 00:15:38.510 { 00:15:38.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.510 "dma_device_type": 2 00:15:38.510 } 00:15:38.510 ], 00:15:38.510 "driver_specific": {} 00:15:38.510 } 00:15:38.510 ] 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.510 11:48:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.510 "name": "Existed_Raid", 00:15:38.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.510 "strip_size_kb": 64, 00:15:38.510 "state": "configuring", 00:15:38.510 "raid_level": "raid5f", 00:15:38.510 "superblock": false, 00:15:38.510 "num_base_bdevs": 3, 00:15:38.510 "num_base_bdevs_discovered": 2, 00:15:38.510 "num_base_bdevs_operational": 3, 00:15:38.510 "base_bdevs_list": [ 00:15:38.510 { 00:15:38.510 "name": "BaseBdev1", 00:15:38.510 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:38.510 "is_configured": true, 00:15:38.510 "data_offset": 0, 00:15:38.510 "data_size": 65536 00:15:38.510 }, 00:15:38.510 { 00:15:38.510 "name": null, 00:15:38.510 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:38.510 "is_configured": false, 00:15:38.510 "data_offset": 0, 00:15:38.510 "data_size": 65536 00:15:38.510 }, 00:15:38.510 { 00:15:38.510 "name": "BaseBdev3", 00:15:38.510 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:38.510 "is_configured": true, 00:15:38.510 "data_offset": 0, 00:15:38.510 "data_size": 65536 00:15:38.510 } 00:15:38.510 ] 00:15:38.510 }' 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.510 11:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.079 11:48:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.079 [2024-11-04 11:48:04.371226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.079 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.079 11:48:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.080 "name": "Existed_Raid", 00:15:39.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.080 "strip_size_kb": 64, 00:15:39.080 "state": "configuring", 00:15:39.080 "raid_level": "raid5f", 00:15:39.080 "superblock": false, 00:15:39.080 "num_base_bdevs": 3, 00:15:39.080 "num_base_bdevs_discovered": 1, 00:15:39.080 "num_base_bdevs_operational": 3, 00:15:39.080 "base_bdevs_list": [ 00:15:39.080 { 00:15:39.080 "name": "BaseBdev1", 00:15:39.080 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:39.080 "is_configured": true, 00:15:39.080 "data_offset": 0, 00:15:39.080 "data_size": 65536 00:15:39.080 }, 00:15:39.080 { 00:15:39.080 "name": null, 00:15:39.080 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:39.080 "is_configured": false, 00:15:39.080 "data_offset": 0, 00:15:39.080 "data_size": 65536 00:15:39.080 }, 00:15:39.080 { 00:15:39.080 "name": null, 
00:15:39.080 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:39.080 "is_configured": false, 00:15:39.080 "data_offset": 0, 00:15:39.080 "data_size": 65536 00:15:39.080 } 00:15:39.080 ] 00:15:39.080 }' 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.080 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.339 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.340 [2024-11-04 11:48:04.846458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.340 11:48:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.340 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.599 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.599 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.599 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.599 "name": "Existed_Raid", 00:15:39.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.599 "strip_size_kb": 64, 00:15:39.599 "state": "configuring", 00:15:39.599 "raid_level": "raid5f", 00:15:39.599 "superblock": false, 00:15:39.599 "num_base_bdevs": 3, 00:15:39.599 "num_base_bdevs_discovered": 2, 00:15:39.599 "num_base_bdevs_operational": 3, 00:15:39.599 "base_bdevs_list": [ 00:15:39.599 { 
00:15:39.599 "name": "BaseBdev1", 00:15:39.599 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:39.599 "is_configured": true, 00:15:39.599 "data_offset": 0, 00:15:39.599 "data_size": 65536 00:15:39.599 }, 00:15:39.599 { 00:15:39.599 "name": null, 00:15:39.599 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:39.599 "is_configured": false, 00:15:39.599 "data_offset": 0, 00:15:39.599 "data_size": 65536 00:15:39.599 }, 00:15:39.599 { 00:15:39.599 "name": "BaseBdev3", 00:15:39.599 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:39.599 "is_configured": true, 00:15:39.599 "data_offset": 0, 00:15:39.599 "data_size": 65536 00:15:39.599 } 00:15:39.599 ] 00:15:39.599 }' 00:15:39.599 11:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.599 11:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.857 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.857 [2024-11-04 11:48:05.365575] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.116 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.116 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.116 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.116 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.116 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.117 "name": "Existed_Raid", 00:15:40.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.117 "strip_size_kb": 64, 00:15:40.117 "state": "configuring", 00:15:40.117 "raid_level": "raid5f", 00:15:40.117 "superblock": false, 00:15:40.117 "num_base_bdevs": 3, 00:15:40.117 "num_base_bdevs_discovered": 1, 00:15:40.117 "num_base_bdevs_operational": 3, 00:15:40.117 "base_bdevs_list": [ 00:15:40.117 { 00:15:40.117 "name": null, 00:15:40.117 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:40.117 "is_configured": false, 00:15:40.117 "data_offset": 0, 00:15:40.117 "data_size": 65536 00:15:40.117 }, 00:15:40.117 { 00:15:40.117 "name": null, 00:15:40.117 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:40.117 "is_configured": false, 00:15:40.117 "data_offset": 0, 00:15:40.117 "data_size": 65536 00:15:40.117 }, 00:15:40.117 { 00:15:40.117 "name": "BaseBdev3", 00:15:40.117 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:40.117 "is_configured": true, 00:15:40.117 "data_offset": 0, 00:15:40.117 "data_size": 65536 00:15:40.117 } 00:15:40.117 ] 00:15:40.117 }' 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.117 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.376 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:40.376 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.376 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.376 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.376 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 [2024-11-04 11:48:05.913652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.636 11:48:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.636 "name": "Existed_Raid", 00:15:40.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.636 "strip_size_kb": 64, 00:15:40.636 "state": "configuring", 00:15:40.636 "raid_level": "raid5f", 00:15:40.636 "superblock": false, 00:15:40.636 "num_base_bdevs": 3, 00:15:40.636 "num_base_bdevs_discovered": 2, 00:15:40.636 "num_base_bdevs_operational": 3, 00:15:40.636 "base_bdevs_list": [ 00:15:40.636 { 00:15:40.636 "name": null, 00:15:40.636 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:40.636 "is_configured": false, 00:15:40.636 "data_offset": 0, 00:15:40.636 "data_size": 65536 00:15:40.636 }, 00:15:40.636 { 00:15:40.636 "name": "BaseBdev2", 00:15:40.636 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:40.636 "is_configured": true, 00:15:40.636 "data_offset": 0, 00:15:40.636 "data_size": 65536 00:15:40.636 }, 00:15:40.636 { 00:15:40.636 "name": "BaseBdev3", 00:15:40.636 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:40.636 "is_configured": true, 00:15:40.636 "data_offset": 0, 00:15:40.636 "data_size": 65536 00:15:40.636 } 00:15:40.636 ] 00:15:40.636 }' 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.636 11:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.896 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.896 11:48:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:40.897 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.897 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.897 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 22e3c021-740d-4d96-861b-110ef04e884f 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.156 [2024-11-04 11:48:06.506310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.156 [2024-11-04 11:48:06.506447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:41.156 [2024-11-04 11:48:06.506464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:41.156 [2024-11-04 11:48:06.506734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:41.156 [2024-11-04 11:48:06.512060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:41.156 [2024-11-04 11:48:06.512128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:41.156 [2024-11-04 11:48:06.512456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.156 NewBaseBdev 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.156 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.157 11:48:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.157 [ 00:15:41.157 { 00:15:41.157 "name": "NewBaseBdev", 00:15:41.157 "aliases": [ 00:15:41.157 "22e3c021-740d-4d96-861b-110ef04e884f" 00:15:41.157 ], 00:15:41.157 "product_name": "Malloc disk", 00:15:41.157 "block_size": 512, 00:15:41.157 "num_blocks": 65536, 00:15:41.157 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:41.157 "assigned_rate_limits": { 00:15:41.157 "rw_ios_per_sec": 0, 00:15:41.157 "rw_mbytes_per_sec": 0, 00:15:41.157 "r_mbytes_per_sec": 0, 00:15:41.157 "w_mbytes_per_sec": 0 00:15:41.157 }, 00:15:41.157 "claimed": true, 00:15:41.157 "claim_type": "exclusive_write", 00:15:41.157 "zoned": false, 00:15:41.157 "supported_io_types": { 00:15:41.157 "read": true, 00:15:41.157 "write": true, 00:15:41.157 "unmap": true, 00:15:41.157 "flush": true, 00:15:41.157 "reset": true, 00:15:41.157 "nvme_admin": false, 00:15:41.157 "nvme_io": false, 00:15:41.157 "nvme_io_md": false, 00:15:41.157 "write_zeroes": true, 00:15:41.157 "zcopy": true, 00:15:41.157 "get_zone_info": false, 00:15:41.157 "zone_management": false, 00:15:41.157 "zone_append": false, 00:15:41.157 "compare": false, 00:15:41.157 "compare_and_write": false, 00:15:41.157 "abort": true, 00:15:41.157 "seek_hole": false, 00:15:41.157 "seek_data": false, 00:15:41.157 "copy": true, 00:15:41.157 "nvme_iov_md": false 00:15:41.157 }, 00:15:41.157 "memory_domains": [ 00:15:41.157 { 00:15:41.157 "dma_device_id": "system", 00:15:41.157 "dma_device_type": 1 00:15:41.157 }, 00:15:41.157 { 00:15:41.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.157 "dma_device_type": 2 00:15:41.157 } 00:15:41.157 ], 00:15:41.157 "driver_specific": {} 00:15:41.157 } 00:15:41.157 ] 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:41.157 11:48:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.157 "name": "Existed_Raid", 00:15:41.157 "uuid": "00df5bff-d294-4b3d-b794-90c5c21df3f4", 00:15:41.157 "strip_size_kb": 64, 00:15:41.157 "state": "online", 
00:15:41.157 "raid_level": "raid5f", 00:15:41.157 "superblock": false, 00:15:41.157 "num_base_bdevs": 3, 00:15:41.157 "num_base_bdevs_discovered": 3, 00:15:41.157 "num_base_bdevs_operational": 3, 00:15:41.157 "base_bdevs_list": [ 00:15:41.157 { 00:15:41.157 "name": "NewBaseBdev", 00:15:41.157 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:41.157 "is_configured": true, 00:15:41.157 "data_offset": 0, 00:15:41.157 "data_size": 65536 00:15:41.157 }, 00:15:41.157 { 00:15:41.157 "name": "BaseBdev2", 00:15:41.157 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:41.157 "is_configured": true, 00:15:41.157 "data_offset": 0, 00:15:41.157 "data_size": 65536 00:15:41.157 }, 00:15:41.157 { 00:15:41.157 "name": "BaseBdev3", 00:15:41.157 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:41.157 "is_configured": true, 00:15:41.157 "data_offset": 0, 00:15:41.157 "data_size": 65536 00:15:41.157 } 00:15:41.157 ] 00:15:41.157 }' 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.157 11:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.743 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:41.743 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:41.743 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.743 11:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.743 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:41.743 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.743 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:41.743 11:48:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.743 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.743 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.743 [2024-11-04 11:48:07.014652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.743 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.743 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.743 "name": "Existed_Raid", 00:15:41.743 "aliases": [ 00:15:41.743 "00df5bff-d294-4b3d-b794-90c5c21df3f4" 00:15:41.743 ], 00:15:41.743 "product_name": "Raid Volume", 00:15:41.743 "block_size": 512, 00:15:41.743 "num_blocks": 131072, 00:15:41.743 "uuid": "00df5bff-d294-4b3d-b794-90c5c21df3f4", 00:15:41.743 "assigned_rate_limits": { 00:15:41.743 "rw_ios_per_sec": 0, 00:15:41.743 "rw_mbytes_per_sec": 0, 00:15:41.743 "r_mbytes_per_sec": 0, 00:15:41.743 "w_mbytes_per_sec": 0 00:15:41.743 }, 00:15:41.743 "claimed": false, 00:15:41.743 "zoned": false, 00:15:41.743 "supported_io_types": { 00:15:41.744 "read": true, 00:15:41.744 "write": true, 00:15:41.744 "unmap": false, 00:15:41.744 "flush": false, 00:15:41.744 "reset": true, 00:15:41.744 "nvme_admin": false, 00:15:41.744 "nvme_io": false, 00:15:41.744 "nvme_io_md": false, 00:15:41.744 "write_zeroes": true, 00:15:41.744 "zcopy": false, 00:15:41.744 "get_zone_info": false, 00:15:41.744 "zone_management": false, 00:15:41.744 "zone_append": false, 00:15:41.744 "compare": false, 00:15:41.744 "compare_and_write": false, 00:15:41.744 "abort": false, 00:15:41.744 "seek_hole": false, 00:15:41.744 "seek_data": false, 00:15:41.744 "copy": false, 00:15:41.744 "nvme_iov_md": false 00:15:41.744 }, 00:15:41.744 "driver_specific": { 00:15:41.744 "raid": { 00:15:41.744 "uuid": 
"00df5bff-d294-4b3d-b794-90c5c21df3f4", 00:15:41.744 "strip_size_kb": 64, 00:15:41.744 "state": "online", 00:15:41.744 "raid_level": "raid5f", 00:15:41.744 "superblock": false, 00:15:41.744 "num_base_bdevs": 3, 00:15:41.744 "num_base_bdevs_discovered": 3, 00:15:41.744 "num_base_bdevs_operational": 3, 00:15:41.744 "base_bdevs_list": [ 00:15:41.744 { 00:15:41.744 "name": "NewBaseBdev", 00:15:41.744 "uuid": "22e3c021-740d-4d96-861b-110ef04e884f", 00:15:41.744 "is_configured": true, 00:15:41.744 "data_offset": 0, 00:15:41.744 "data_size": 65536 00:15:41.744 }, 00:15:41.744 { 00:15:41.744 "name": "BaseBdev2", 00:15:41.744 "uuid": "cfbe08e0-8672-44e8-93e3-aeb55f923b3e", 00:15:41.744 "is_configured": true, 00:15:41.744 "data_offset": 0, 00:15:41.744 "data_size": 65536 00:15:41.744 }, 00:15:41.744 { 00:15:41.744 "name": "BaseBdev3", 00:15:41.744 "uuid": "b4a7f657-12b2-4bce-8572-805a8b426034", 00:15:41.744 "is_configured": true, 00:15:41.744 "data_offset": 0, 00:15:41.744 "data_size": 65536 00:15:41.744 } 00:15:41.744 ] 00:15:41.744 } 00:15:41.744 } 00:15:41.744 }' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:41.744 BaseBdev2 00:15:41.744 BaseBdev3' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.744 11:48:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.004 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.005 [2024-11-04 11:48:07.289928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.005 [2024-11-04 11:48:07.290004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.005 [2024-11-04 11:48:07.290115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.005 [2024-11-04 11:48:07.290441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.005 [2024-11-04 11:48:07.290502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80129 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80129 ']' 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80129 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80129 00:15:42.005 killing process with pid 80129 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80129' 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80129 00:15:42.005 [2024-11-04 11:48:07.332547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.005 11:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80129 00:15:42.361 [2024-11-04 11:48:07.650433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.297 11:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:43.297 00:15:43.297 real 0m10.752s 00:15:43.297 user 0m17.049s 00:15:43.297 sys 0m1.928s 00:15:43.297 11:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.297 11:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.297 ************************************ 00:15:43.297 END TEST raid5f_state_function_test 00:15:43.297 ************************************ 00:15:43.557 11:48:08 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:43.558 11:48:08 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:43.558 11:48:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:43.558 11:48:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.558 ************************************ 00:15:43.558 START TEST raid5f_state_function_test_sb 00:15:43.558 ************************************ 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:43.558 11:48:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80752 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80752' 00:15:43.558 Process raid pid: 80752 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80752 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80752 ']' 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.558 11:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.558 [2024-11-04 11:48:08.965262] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:15:43.558 [2024-11-04 11:48:08.965524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.817 [2024-11-04 11:48:09.125214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.817 [2024-11-04 11:48:09.244868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.077 [2024-11-04 11:48:09.461176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.077 [2024-11-04 11:48:09.461270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.337 [2024-11-04 11:48:09.841514] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.337 [2024-11-04 11:48:09.841631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.337 [2024-11-04 11:48:09.841646] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.337 [2024-11-04 11:48:09.841656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.337 [2024-11-04 11:48:09.841662] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:44.337 [2024-11-04 11:48:09.841671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.337 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.338 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.599 11:48:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.599 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.599 "name": "Existed_Raid", 00:15:44.599 "uuid": "4d214b7e-527a-4895-b2fd-00036124c1b8", 00:15:44.599 "strip_size_kb": 64, 00:15:44.599 "state": "configuring", 00:15:44.599 "raid_level": "raid5f", 00:15:44.599 "superblock": true, 00:15:44.599 "num_base_bdevs": 3, 00:15:44.599 "num_base_bdevs_discovered": 0, 00:15:44.599 "num_base_bdevs_operational": 3, 00:15:44.599 "base_bdevs_list": [ 00:15:44.599 { 00:15:44.599 "name": "BaseBdev1", 00:15:44.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.599 "is_configured": false, 00:15:44.599 "data_offset": 0, 00:15:44.599 "data_size": 0 00:15:44.599 }, 00:15:44.599 { 00:15:44.599 "name": "BaseBdev2", 00:15:44.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.599 "is_configured": false, 00:15:44.599 "data_offset": 0, 00:15:44.599 "data_size": 0 00:15:44.599 }, 00:15:44.599 { 00:15:44.599 "name": "BaseBdev3", 00:15:44.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.599 "is_configured": false, 00:15:44.599 "data_offset": 0, 00:15:44.599 "data_size": 0 00:15:44.599 } 00:15:44.599 ] 00:15:44.599 }' 00:15:44.599 11:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.599 11:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.858 [2024-11-04 11:48:10.300652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.858 
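The `verify_raid_bdev_state` helper seen above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the resulting fields against the expected values (`configuring`, `raid5f`, strip size 64, 3 operational base bdevs). A minimal Python sketch of the same check, using a payload shaped like the dump above (the helper name and values are illustrative, copied from the log, not SPDK code):

```python
import json

# JSON shaped like the `bdev_raid_get_bdevs all` output captured in the log above,
# before any base bdev exists (all entries unconfigured).
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": "BaseBdev2", "is_configured": false},
      {"name": "BaseBdev3", "is_configured": false}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, strip_size, operational):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # The discovered count must agree with the configured entries in base_bdevs_list.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "configuring", "raid5f", 64, 3)
print(info["num_base_bdevs_discovered"])  # 0
```

With superblock mode enabled (`-s`), once base bdevs appear their `data_offset` becomes 2048 and `data_size` drops from 65536 to 63488 blocks, as the later dumps show.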
[2024-11-04 11:48:10.300747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.858 [2024-11-04 11:48:10.312638] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.858 [2024-11-04 11:48:10.312726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.858 [2024-11-04 11:48:10.312754] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.858 [2024-11-04 11:48:10.312778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.858 [2024-11-04 11:48:10.312796] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:44.858 [2024-11-04 11:48:10.312818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.858 [2024-11-04 11:48:10.362910] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.858 BaseBdev1 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.858 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.117 [ 00:15:45.117 { 00:15:45.117 "name": "BaseBdev1", 00:15:45.117 "aliases": [ 00:15:45.117 "b05729ea-b286-4f42-99f2-461914aada96" 00:15:45.117 ], 00:15:45.117 "product_name": "Malloc disk", 00:15:45.117 "block_size": 512, 00:15:45.117 
"num_blocks": 65536, 00:15:45.117 "uuid": "b05729ea-b286-4f42-99f2-461914aada96", 00:15:45.117 "assigned_rate_limits": { 00:15:45.117 "rw_ios_per_sec": 0, 00:15:45.117 "rw_mbytes_per_sec": 0, 00:15:45.117 "r_mbytes_per_sec": 0, 00:15:45.117 "w_mbytes_per_sec": 0 00:15:45.117 }, 00:15:45.117 "claimed": true, 00:15:45.117 "claim_type": "exclusive_write", 00:15:45.117 "zoned": false, 00:15:45.117 "supported_io_types": { 00:15:45.117 "read": true, 00:15:45.117 "write": true, 00:15:45.117 "unmap": true, 00:15:45.117 "flush": true, 00:15:45.117 "reset": true, 00:15:45.117 "nvme_admin": false, 00:15:45.117 "nvme_io": false, 00:15:45.117 "nvme_io_md": false, 00:15:45.117 "write_zeroes": true, 00:15:45.117 "zcopy": true, 00:15:45.117 "get_zone_info": false, 00:15:45.117 "zone_management": false, 00:15:45.117 "zone_append": false, 00:15:45.117 "compare": false, 00:15:45.117 "compare_and_write": false, 00:15:45.117 "abort": true, 00:15:45.117 "seek_hole": false, 00:15:45.117 "seek_data": false, 00:15:45.117 "copy": true, 00:15:45.117 "nvme_iov_md": false 00:15:45.117 }, 00:15:45.117 "memory_domains": [ 00:15:45.117 { 00:15:45.117 "dma_device_id": "system", 00:15:45.117 "dma_device_type": 1 00:15:45.117 }, 00:15:45.117 { 00:15:45.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.117 "dma_device_type": 2 00:15:45.117 } 00:15:45.117 ], 00:15:45.117 "driver_specific": {} 00:15:45.117 } 00:15:45.117 ] 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.117 "name": "Existed_Raid", 00:15:45.117 "uuid": "a4a657d9-02c0-47bc-bbe4-88f2d99ae808", 00:15:45.117 "strip_size_kb": 64, 00:15:45.117 "state": "configuring", 00:15:45.117 "raid_level": "raid5f", 00:15:45.117 "superblock": true, 00:15:45.117 "num_base_bdevs": 3, 00:15:45.117 "num_base_bdevs_discovered": 1, 00:15:45.117 "num_base_bdevs_operational": 3, 00:15:45.117 "base_bdevs_list": [ 00:15:45.117 { 00:15:45.117 
"name": "BaseBdev1", 00:15:45.117 "uuid": "b05729ea-b286-4f42-99f2-461914aada96", 00:15:45.117 "is_configured": true, 00:15:45.117 "data_offset": 2048, 00:15:45.117 "data_size": 63488 00:15:45.117 }, 00:15:45.117 { 00:15:45.117 "name": "BaseBdev2", 00:15:45.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.117 "is_configured": false, 00:15:45.117 "data_offset": 0, 00:15:45.117 "data_size": 0 00:15:45.117 }, 00:15:45.117 { 00:15:45.117 "name": "BaseBdev3", 00:15:45.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.117 "is_configured": false, 00:15:45.117 "data_offset": 0, 00:15:45.117 "data_size": 0 00:15:45.117 } 00:15:45.117 ] 00:15:45.117 }' 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.117 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.377 [2024-11-04 11:48:10.838178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.377 [2024-11-04 11:48:10.838298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:45.377 [2024-11-04 11:48:10.850204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.377 [2024-11-04 11:48:10.852120] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.377 [2024-11-04 11:48:10.852200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.377 [2024-11-04 11:48:10.852228] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.377 [2024-11-04 11:48:10.852251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.377 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.636 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.636 "name": "Existed_Raid", 00:15:45.636 "uuid": "3069b51f-b12c-4f5c-8168-2b87dbbc0e20", 00:15:45.636 "strip_size_kb": 64, 00:15:45.636 "state": "configuring", 00:15:45.636 "raid_level": "raid5f", 00:15:45.636 "superblock": true, 00:15:45.636 "num_base_bdevs": 3, 00:15:45.636 "num_base_bdevs_discovered": 1, 00:15:45.636 "num_base_bdevs_operational": 3, 00:15:45.636 "base_bdevs_list": [ 00:15:45.636 { 00:15:45.636 "name": "BaseBdev1", 00:15:45.636 "uuid": "b05729ea-b286-4f42-99f2-461914aada96", 00:15:45.636 "is_configured": true, 00:15:45.636 "data_offset": 2048, 00:15:45.636 "data_size": 63488 00:15:45.636 }, 00:15:45.636 { 00:15:45.636 "name": "BaseBdev2", 00:15:45.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.636 "is_configured": false, 00:15:45.636 "data_offset": 0, 00:15:45.636 "data_size": 0 00:15:45.636 }, 00:15:45.636 { 00:15:45.636 "name": "BaseBdev3", 00:15:45.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.636 "is_configured": false, 00:15:45.636 "data_offset": 0, 00:15:45.636 "data_size": 
0 00:15:45.636 } 00:15:45.636 ] 00:15:45.636 }' 00:15:45.636 11:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.636 11:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.895 [2024-11-04 11:48:11.313301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.895 BaseBdev2 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.895 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.896 [ 00:15:45.896 { 00:15:45.896 "name": "BaseBdev2", 00:15:45.896 "aliases": [ 00:15:45.896 "bed68f63-f170-46cd-8459-3e1770f238e6" 00:15:45.896 ], 00:15:45.896 "product_name": "Malloc disk", 00:15:45.896 "block_size": 512, 00:15:45.896 "num_blocks": 65536, 00:15:45.896 "uuid": "bed68f63-f170-46cd-8459-3e1770f238e6", 00:15:45.896 "assigned_rate_limits": { 00:15:45.896 "rw_ios_per_sec": 0, 00:15:45.896 "rw_mbytes_per_sec": 0, 00:15:45.896 "r_mbytes_per_sec": 0, 00:15:45.896 "w_mbytes_per_sec": 0 00:15:45.896 }, 00:15:45.896 "claimed": true, 00:15:45.896 "claim_type": "exclusive_write", 00:15:45.896 "zoned": false, 00:15:45.896 "supported_io_types": { 00:15:45.896 "read": true, 00:15:45.896 "write": true, 00:15:45.896 "unmap": true, 00:15:45.896 "flush": true, 00:15:45.896 "reset": true, 00:15:45.896 "nvme_admin": false, 00:15:45.896 "nvme_io": false, 00:15:45.896 "nvme_io_md": false, 00:15:45.896 "write_zeroes": true, 00:15:45.896 "zcopy": true, 00:15:45.896 "get_zone_info": false, 00:15:45.896 "zone_management": false, 00:15:45.896 "zone_append": false, 00:15:45.896 "compare": false, 00:15:45.896 "compare_and_write": false, 00:15:45.896 "abort": true, 00:15:45.896 "seek_hole": false, 00:15:45.896 "seek_data": false, 00:15:45.896 "copy": true, 00:15:45.896 "nvme_iov_md": false 00:15:45.896 }, 00:15:45.896 "memory_domains": [ 00:15:45.896 { 00:15:45.896 "dma_device_id": "system", 00:15:45.896 "dma_device_type": 1 00:15:45.896 }, 00:15:45.896 { 00:15:45.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.896 "dma_device_type": 2 00:15:45.896 } 
00:15:45.896 ], 00:15:45.896 "driver_specific": {} 00:15:45.896 } 00:15:45.896 ] 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
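The `(( i < num_base_bdevs ))` loop above repeats this pattern per base bdev: `bdev_malloc_create 32 512 -b BaseBdevN`, wait for the claim via `waitforbdev`, then re-run the state check expecting `num_base_bdevs_discovered` to have incremented while the array stays in `configuring`. A sketch of that invariant, with `base_bdevs_list` values copied from the dump after BaseBdev2 was claimed (the `state` derivation is illustrative, not SPDK's actual state machine):

```python
import json

# base_bdevs_list as dumped after BaseBdev1 and BaseBdev2 were claimed (values from the log).
base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
  {"name": "BaseBdev2", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
  {"name": "BaseBdev3", "is_configured": false, "data_offset": 0,    "data_size": 0}
]
""")

num_base_bdevs = 3
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
# The array only leaves "configuring" once every base bdev has been discovered;
# the log shows exactly that transition after BaseBdev3 is added.
state = "online" if discovered == num_base_bdevs else "configuring"
print(discovered, state)  # 2 configuring
```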
00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.896 "name": "Existed_Raid", 00:15:45.896 "uuid": "3069b51f-b12c-4f5c-8168-2b87dbbc0e20", 00:15:45.896 "strip_size_kb": 64, 00:15:45.896 "state": "configuring", 00:15:45.896 "raid_level": "raid5f", 00:15:45.896 "superblock": true, 00:15:45.896 "num_base_bdevs": 3, 00:15:45.896 "num_base_bdevs_discovered": 2, 00:15:45.896 "num_base_bdevs_operational": 3, 00:15:45.896 "base_bdevs_list": [ 00:15:45.896 { 00:15:45.896 "name": "BaseBdev1", 00:15:45.896 "uuid": "b05729ea-b286-4f42-99f2-461914aada96", 00:15:45.896 "is_configured": true, 00:15:45.896 "data_offset": 2048, 00:15:45.896 "data_size": 63488 00:15:45.896 }, 00:15:45.896 { 00:15:45.896 "name": "BaseBdev2", 00:15:45.896 "uuid": "bed68f63-f170-46cd-8459-3e1770f238e6", 00:15:45.896 "is_configured": true, 00:15:45.896 "data_offset": 2048, 00:15:45.896 "data_size": 63488 00:15:45.896 }, 00:15:45.896 { 00:15:45.896 "name": "BaseBdev3", 00:15:45.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.896 "is_configured": false, 00:15:45.896 "data_offset": 0, 00:15:45.896 "data_size": 0 00:15:45.896 } 00:15:45.896 ] 00:15:45.896 }' 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.896 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.464 [2024-11-04 11:48:11.849144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.464 [2024-11-04 11:48:11.849557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:46.464 [2024-11-04 11:48:11.849624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.464 [2024-11-04 11:48:11.849965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:46.464 BaseBdev3 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.464 [2024-11-04 11:48:11.855843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:46.464 [2024-11-04 11:48:11.855899] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:46.464 [2024-11-04 11:48:11.856164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.464 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.465 [ 00:15:46.465 { 00:15:46.465 "name": "BaseBdev3", 00:15:46.465 "aliases": [ 00:15:46.465 "ed05d553-d713-4bcf-8e27-95b80871aa1b" 00:15:46.465 ], 00:15:46.465 "product_name": "Malloc disk", 00:15:46.465 "block_size": 512, 00:15:46.465 "num_blocks": 65536, 00:15:46.465 "uuid": "ed05d553-d713-4bcf-8e27-95b80871aa1b", 00:15:46.465 "assigned_rate_limits": { 00:15:46.465 "rw_ios_per_sec": 0, 00:15:46.465 "rw_mbytes_per_sec": 0, 00:15:46.465 "r_mbytes_per_sec": 0, 00:15:46.465 "w_mbytes_per_sec": 0 00:15:46.465 }, 00:15:46.465 "claimed": true, 00:15:46.465 "claim_type": "exclusive_write", 00:15:46.465 "zoned": false, 00:15:46.465 "supported_io_types": { 00:15:46.465 "read": true, 00:15:46.465 "write": true, 00:15:46.465 "unmap": true, 00:15:46.465 "flush": true, 00:15:46.465 "reset": true, 00:15:46.465 "nvme_admin": false, 00:15:46.465 "nvme_io": false, 00:15:46.465 "nvme_io_md": false, 00:15:46.465 "write_zeroes": true, 00:15:46.465 "zcopy": true, 00:15:46.465 "get_zone_info": false, 00:15:46.465 "zone_management": false, 00:15:46.465 "zone_append": false, 00:15:46.465 "compare": false, 00:15:46.465 "compare_and_write": false, 00:15:46.465 "abort": true, 00:15:46.465 "seek_hole": false, 00:15:46.465 "seek_data": false, 00:15:46.465 "copy": true, 00:15:46.465 
"nvme_iov_md": false 00:15:46.465 }, 00:15:46.465 "memory_domains": [ 00:15:46.465 { 00:15:46.465 "dma_device_id": "system", 00:15:46.465 "dma_device_type": 1 00:15:46.465 }, 00:15:46.465 { 00:15:46.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.465 "dma_device_type": 2 00:15:46.465 } 00:15:46.465 ], 00:15:46.465 "driver_specific": {} 00:15:46.465 } 00:15:46.465 ] 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.465 "name": "Existed_Raid", 00:15:46.465 "uuid": "3069b51f-b12c-4f5c-8168-2b87dbbc0e20", 00:15:46.465 "strip_size_kb": 64, 00:15:46.465 "state": "online", 00:15:46.465 "raid_level": "raid5f", 00:15:46.465 "superblock": true, 00:15:46.465 "num_base_bdevs": 3, 00:15:46.465 "num_base_bdevs_discovered": 3, 00:15:46.465 "num_base_bdevs_operational": 3, 00:15:46.465 "base_bdevs_list": [ 00:15:46.465 { 00:15:46.465 "name": "BaseBdev1", 00:15:46.465 "uuid": "b05729ea-b286-4f42-99f2-461914aada96", 00:15:46.465 "is_configured": true, 00:15:46.465 "data_offset": 2048, 00:15:46.465 "data_size": 63488 00:15:46.465 }, 00:15:46.465 { 00:15:46.465 "name": "BaseBdev2", 00:15:46.465 "uuid": "bed68f63-f170-46cd-8459-3e1770f238e6", 00:15:46.465 "is_configured": true, 00:15:46.465 "data_offset": 2048, 00:15:46.465 "data_size": 63488 00:15:46.465 }, 00:15:46.465 { 00:15:46.465 "name": "BaseBdev3", 00:15:46.465 "uuid": "ed05d553-d713-4bcf-8e27-95b80871aa1b", 00:15:46.465 "is_configured": true, 00:15:46.465 "data_offset": 2048, 00:15:46.465 "data_size": 63488 00:15:46.465 } 00:15:46.465 ] 00:15:46.465 }' 00:15:46.465 11:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.465 11:48:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.034 [2024-11-04 11:48:12.354039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.034 "name": "Existed_Raid", 00:15:47.034 "aliases": [ 00:15:47.034 "3069b51f-b12c-4f5c-8168-2b87dbbc0e20" 00:15:47.034 ], 00:15:47.034 "product_name": "Raid Volume", 00:15:47.034 "block_size": 512, 00:15:47.034 "num_blocks": 126976, 00:15:47.034 "uuid": "3069b51f-b12c-4f5c-8168-2b87dbbc0e20", 00:15:47.034 "assigned_rate_limits": { 00:15:47.034 "rw_ios_per_sec": 0, 00:15:47.034 
"rw_mbytes_per_sec": 0, 00:15:47.034 "r_mbytes_per_sec": 0, 00:15:47.034 "w_mbytes_per_sec": 0 00:15:47.034 }, 00:15:47.034 "claimed": false, 00:15:47.034 "zoned": false, 00:15:47.034 "supported_io_types": { 00:15:47.034 "read": true, 00:15:47.034 "write": true, 00:15:47.034 "unmap": false, 00:15:47.034 "flush": false, 00:15:47.034 "reset": true, 00:15:47.034 "nvme_admin": false, 00:15:47.034 "nvme_io": false, 00:15:47.034 "nvme_io_md": false, 00:15:47.034 "write_zeroes": true, 00:15:47.034 "zcopy": false, 00:15:47.034 "get_zone_info": false, 00:15:47.034 "zone_management": false, 00:15:47.034 "zone_append": false, 00:15:47.034 "compare": false, 00:15:47.034 "compare_and_write": false, 00:15:47.034 "abort": false, 00:15:47.034 "seek_hole": false, 00:15:47.034 "seek_data": false, 00:15:47.034 "copy": false, 00:15:47.034 "nvme_iov_md": false 00:15:47.034 }, 00:15:47.034 "driver_specific": { 00:15:47.034 "raid": { 00:15:47.034 "uuid": "3069b51f-b12c-4f5c-8168-2b87dbbc0e20", 00:15:47.034 "strip_size_kb": 64, 00:15:47.034 "state": "online", 00:15:47.034 "raid_level": "raid5f", 00:15:47.034 "superblock": true, 00:15:47.034 "num_base_bdevs": 3, 00:15:47.034 "num_base_bdevs_discovered": 3, 00:15:47.034 "num_base_bdevs_operational": 3, 00:15:47.034 "base_bdevs_list": [ 00:15:47.034 { 00:15:47.034 "name": "BaseBdev1", 00:15:47.034 "uuid": "b05729ea-b286-4f42-99f2-461914aada96", 00:15:47.034 "is_configured": true, 00:15:47.034 "data_offset": 2048, 00:15:47.034 "data_size": 63488 00:15:47.034 }, 00:15:47.034 { 00:15:47.034 "name": "BaseBdev2", 00:15:47.034 "uuid": "bed68f63-f170-46cd-8459-3e1770f238e6", 00:15:47.034 "is_configured": true, 00:15:47.034 "data_offset": 2048, 00:15:47.034 "data_size": 63488 00:15:47.034 }, 00:15:47.034 { 00:15:47.034 "name": "BaseBdev3", 00:15:47.034 "uuid": "ed05d553-d713-4bcf-8e27-95b80871aa1b", 00:15:47.034 "is_configured": true, 00:15:47.034 "data_offset": 2048, 00:15:47.034 "data_size": 63488 00:15:47.034 } 00:15:47.034 ] 00:15:47.034 } 
00:15:47.034 } 00:15:47.034 }' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:47.034 BaseBdev2 00:15:47.034 BaseBdev3' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.034 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.294 [2024-11-04 
11:48:12.633459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.294 11:48:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.294 "name": "Existed_Raid", 00:15:47.294 "uuid": "3069b51f-b12c-4f5c-8168-2b87dbbc0e20", 00:15:47.294 "strip_size_kb": 64, 00:15:47.294 "state": "online", 00:15:47.294 "raid_level": "raid5f", 00:15:47.294 "superblock": true, 00:15:47.294 "num_base_bdevs": 3, 00:15:47.294 "num_base_bdevs_discovered": 2, 00:15:47.294 "num_base_bdevs_operational": 2, 00:15:47.294 "base_bdevs_list": [ 00:15:47.294 { 00:15:47.294 "name": null, 00:15:47.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.294 "is_configured": false, 00:15:47.294 "data_offset": 0, 00:15:47.294 "data_size": 63488 00:15:47.294 }, 00:15:47.294 { 00:15:47.294 "name": "BaseBdev2", 00:15:47.294 "uuid": "bed68f63-f170-46cd-8459-3e1770f238e6", 00:15:47.294 "is_configured": true, 00:15:47.294 "data_offset": 2048, 00:15:47.294 "data_size": 63488 00:15:47.294 }, 00:15:47.294 { 00:15:47.294 "name": "BaseBdev3", 00:15:47.294 "uuid": "ed05d553-d713-4bcf-8e27-95b80871aa1b", 00:15:47.294 "is_configured": true, 00:15:47.294 "data_offset": 2048, 00:15:47.294 "data_size": 63488 00:15:47.294 } 00:15:47.294 ] 00:15:47.294 }' 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.294 11:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:47.861 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.862 [2024-11-04 11:48:13.252304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.862 [2024-11-04 11:48:13.252577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.862 [2024-11-04 11:48:13.350253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:47.862 11:48:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.862 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.120 [2024-11-04 11:48:13.414240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.120 [2024-11-04 11:48:13.414380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.120 
11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.120 BaseBdev2 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:48.120 11:48:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.120 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.378 [ 00:15:48.378 { 00:15:48.378 "name": "BaseBdev2", 00:15:48.378 "aliases": [ 00:15:48.378 "c5e6740f-6b31-4a51-81c8-927fbd2d0df8" 00:15:48.378 ], 00:15:48.378 "product_name": "Malloc disk", 00:15:48.378 "block_size": 512, 00:15:48.378 "num_blocks": 65536, 00:15:48.378 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:48.378 "assigned_rate_limits": { 00:15:48.378 "rw_ios_per_sec": 0, 00:15:48.378 "rw_mbytes_per_sec": 0, 00:15:48.378 "r_mbytes_per_sec": 0, 00:15:48.378 "w_mbytes_per_sec": 0 00:15:48.378 }, 00:15:48.378 "claimed": false, 00:15:48.378 "zoned": false, 00:15:48.378 "supported_io_types": { 00:15:48.378 "read": true, 00:15:48.378 "write": true, 00:15:48.378 "unmap": true, 00:15:48.378 "flush": true, 00:15:48.378 "reset": true, 00:15:48.378 "nvme_admin": false, 00:15:48.378 "nvme_io": false, 00:15:48.378 "nvme_io_md": false, 00:15:48.378 "write_zeroes": true, 00:15:48.378 "zcopy": true, 00:15:48.378 "get_zone_info": false, 
00:15:48.378 "zone_management": false, 00:15:48.378 "zone_append": false, 00:15:48.378 "compare": false, 00:15:48.378 "compare_and_write": false, 00:15:48.378 "abort": true, 00:15:48.378 "seek_hole": false, 00:15:48.378 "seek_data": false, 00:15:48.378 "copy": true, 00:15:48.378 "nvme_iov_md": false 00:15:48.378 }, 00:15:48.378 "memory_domains": [ 00:15:48.378 { 00:15:48.378 "dma_device_id": "system", 00:15:48.378 "dma_device_type": 1 00:15:48.378 }, 00:15:48.378 { 00:15:48.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.378 "dma_device_type": 2 00:15:48.378 } 00:15:48.378 ], 00:15:48.378 "driver_specific": {} 00:15:48.378 } 00:15:48.378 ] 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.378 BaseBdev3 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:48.378 11:48:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.378 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.378 [ 00:15:48.378 { 00:15:48.378 "name": "BaseBdev3", 00:15:48.378 "aliases": [ 00:15:48.378 "b2847304-958c-459d-900f-ac9a79fc2104" 00:15:48.378 ], 00:15:48.378 "product_name": "Malloc disk", 00:15:48.378 "block_size": 512, 00:15:48.378 "num_blocks": 65536, 00:15:48.378 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:48.378 "assigned_rate_limits": { 00:15:48.378 "rw_ios_per_sec": 0, 00:15:48.378 "rw_mbytes_per_sec": 0, 00:15:48.378 "r_mbytes_per_sec": 0, 00:15:48.378 "w_mbytes_per_sec": 0 00:15:48.378 }, 00:15:48.378 "claimed": false, 00:15:48.378 "zoned": false, 00:15:48.378 "supported_io_types": { 00:15:48.378 "read": true, 00:15:48.378 "write": true, 00:15:48.378 "unmap": true, 00:15:48.378 "flush": true, 00:15:48.378 "reset": true, 00:15:48.378 "nvme_admin": false, 00:15:48.378 "nvme_io": false, 00:15:48.378 "nvme_io_md": 
false, 00:15:48.378 "write_zeroes": true, 00:15:48.378 "zcopy": true, 00:15:48.378 "get_zone_info": false, 00:15:48.378 "zone_management": false, 00:15:48.378 "zone_append": false, 00:15:48.378 "compare": false, 00:15:48.378 "compare_and_write": false, 00:15:48.378 "abort": true, 00:15:48.378 "seek_hole": false, 00:15:48.378 "seek_data": false, 00:15:48.378 "copy": true, 00:15:48.379 "nvme_iov_md": false 00:15:48.379 }, 00:15:48.379 "memory_domains": [ 00:15:48.379 { 00:15:48.379 "dma_device_id": "system", 00:15:48.379 "dma_device_type": 1 00:15:48.379 }, 00:15:48.379 { 00:15:48.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.379 "dma_device_type": 2 00:15:48.379 } 00:15:48.379 ], 00:15:48.379 "driver_specific": {} 00:15:48.379 } 00:15:48.379 ] 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.379 [2024-11-04 11:48:13.756520] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.379 [2024-11-04 11:48:13.756639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.379 [2024-11-04 11:48:13.756713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:48.379 [2024-11-04 11:48:13.758931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.379 11:48:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.379 "name": "Existed_Raid", 00:15:48.379 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:48.379 "strip_size_kb": 64, 00:15:48.379 "state": "configuring", 00:15:48.379 "raid_level": "raid5f", 00:15:48.379 "superblock": true, 00:15:48.379 "num_base_bdevs": 3, 00:15:48.379 "num_base_bdevs_discovered": 2, 00:15:48.379 "num_base_bdevs_operational": 3, 00:15:48.379 "base_bdevs_list": [ 00:15:48.379 { 00:15:48.379 "name": "BaseBdev1", 00:15:48.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.379 "is_configured": false, 00:15:48.379 "data_offset": 0, 00:15:48.379 "data_size": 0 00:15:48.379 }, 00:15:48.379 { 00:15:48.379 "name": "BaseBdev2", 00:15:48.379 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:48.379 "is_configured": true, 00:15:48.379 "data_offset": 2048, 00:15:48.379 "data_size": 63488 00:15:48.379 }, 00:15:48.379 { 00:15:48.379 "name": "BaseBdev3", 00:15:48.379 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:48.379 "is_configured": true, 00:15:48.379 "data_offset": 2048, 00:15:48.379 "data_size": 63488 00:15:48.379 } 00:15:48.379 ] 00:15:48.379 }' 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.379 11:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.945 [2024-11-04 11:48:14.191820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.945 
11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:48.945 "name": "Existed_Raid", 00:15:48.945 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:48.945 "strip_size_kb": 64, 00:15:48.945 "state": "configuring", 00:15:48.945 "raid_level": "raid5f", 00:15:48.945 "superblock": true, 00:15:48.945 "num_base_bdevs": 3, 00:15:48.945 "num_base_bdevs_discovered": 1, 00:15:48.945 "num_base_bdevs_operational": 3, 00:15:48.945 "base_bdevs_list": [ 00:15:48.945 { 00:15:48.945 "name": "BaseBdev1", 00:15:48.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.945 "is_configured": false, 00:15:48.945 "data_offset": 0, 00:15:48.945 "data_size": 0 00:15:48.945 }, 00:15:48.945 { 00:15:48.945 "name": null, 00:15:48.945 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:48.945 "is_configured": false, 00:15:48.945 "data_offset": 0, 00:15:48.945 "data_size": 63488 00:15:48.945 }, 00:15:48.945 { 00:15:48.945 "name": "BaseBdev3", 00:15:48.945 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:48.945 "is_configured": true, 00:15:48.945 "data_offset": 2048, 00:15:48.945 "data_size": 63488 00:15:48.945 } 00:15:48.945 ] 00:15:48.945 }' 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.945 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.203 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.494 [2024-11-04 11:48:14.746868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.494 BaseBdev1 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.494 
11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.494 [ 00:15:49.494 { 00:15:49.494 "name": "BaseBdev1", 00:15:49.494 "aliases": [ 00:15:49.494 "6485a218-b924-4d06-9b66-75151e610d54" 00:15:49.494 ], 00:15:49.494 "product_name": "Malloc disk", 00:15:49.494 "block_size": 512, 00:15:49.494 "num_blocks": 65536, 00:15:49.494 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:49.494 "assigned_rate_limits": { 00:15:49.494 "rw_ios_per_sec": 0, 00:15:49.494 "rw_mbytes_per_sec": 0, 00:15:49.494 "r_mbytes_per_sec": 0, 00:15:49.494 "w_mbytes_per_sec": 0 00:15:49.494 }, 00:15:49.494 "claimed": true, 00:15:49.494 "claim_type": "exclusive_write", 00:15:49.494 "zoned": false, 00:15:49.494 "supported_io_types": { 00:15:49.494 "read": true, 00:15:49.494 "write": true, 00:15:49.494 "unmap": true, 00:15:49.494 "flush": true, 00:15:49.494 "reset": true, 00:15:49.494 "nvme_admin": false, 00:15:49.494 "nvme_io": false, 00:15:49.494 "nvme_io_md": false, 00:15:49.494 "write_zeroes": true, 00:15:49.494 "zcopy": true, 00:15:49.494 "get_zone_info": false, 00:15:49.494 "zone_management": false, 00:15:49.494 "zone_append": false, 00:15:49.494 "compare": false, 00:15:49.494 "compare_and_write": false, 00:15:49.494 "abort": true, 00:15:49.494 "seek_hole": false, 00:15:49.494 "seek_data": false, 00:15:49.494 "copy": true, 00:15:49.494 "nvme_iov_md": false 00:15:49.494 }, 00:15:49.494 "memory_domains": [ 00:15:49.494 { 00:15:49.494 "dma_device_id": "system", 00:15:49.494 "dma_device_type": 1 00:15:49.494 }, 00:15:49.494 { 00:15:49.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.494 "dma_device_type": 2 00:15:49.494 } 00:15:49.494 ], 00:15:49.494 "driver_specific": {} 00:15:49.494 } 00:15:49.494 ] 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.494 
11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:49.494 "name": "Existed_Raid", 00:15:49.494 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:49.494 "strip_size_kb": 64, 00:15:49.494 "state": "configuring", 00:15:49.494 "raid_level": "raid5f", 00:15:49.494 "superblock": true, 00:15:49.494 "num_base_bdevs": 3, 00:15:49.494 "num_base_bdevs_discovered": 2, 00:15:49.494 "num_base_bdevs_operational": 3, 00:15:49.494 "base_bdevs_list": [ 00:15:49.494 { 00:15:49.494 "name": "BaseBdev1", 00:15:49.494 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:49.494 "is_configured": true, 00:15:49.494 "data_offset": 2048, 00:15:49.494 "data_size": 63488 00:15:49.494 }, 00:15:49.494 { 00:15:49.494 "name": null, 00:15:49.494 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:49.494 "is_configured": false, 00:15:49.494 "data_offset": 0, 00:15:49.494 "data_size": 63488 00:15:49.494 }, 00:15:49.494 { 00:15:49.494 "name": "BaseBdev3", 00:15:49.494 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:49.494 "is_configured": true, 00:15:49.494 "data_offset": 2048, 00:15:49.494 "data_size": 63488 00:15:49.494 } 00:15:49.494 ] 00:15:49.494 }' 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.494 11:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.761 [2024-11-04 11:48:15.226167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.761 11:48:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.761 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.020 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.020 "name": "Existed_Raid", 00:15:50.020 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:50.020 "strip_size_kb": 64, 00:15:50.020 "state": "configuring", 00:15:50.020 "raid_level": "raid5f", 00:15:50.020 "superblock": true, 00:15:50.020 "num_base_bdevs": 3, 00:15:50.020 "num_base_bdevs_discovered": 1, 00:15:50.020 "num_base_bdevs_operational": 3, 00:15:50.020 "base_bdevs_list": [ 00:15:50.020 { 00:15:50.020 "name": "BaseBdev1", 00:15:50.020 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:50.020 "is_configured": true, 00:15:50.020 "data_offset": 2048, 00:15:50.020 "data_size": 63488 00:15:50.020 }, 00:15:50.020 { 00:15:50.020 "name": null, 00:15:50.020 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:50.020 "is_configured": false, 00:15:50.020 "data_offset": 0, 00:15:50.020 "data_size": 63488 00:15:50.020 }, 00:15:50.020 { 00:15:50.020 "name": null, 00:15:50.020 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:50.020 "is_configured": false, 00:15:50.020 "data_offset": 0, 00:15:50.020 "data_size": 63488 00:15:50.020 } 00:15:50.020 ] 00:15:50.020 }' 00:15:50.020 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.020 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.279 [2024-11-04 11:48:15.709495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.279 
11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.279 "name": "Existed_Raid", 00:15:50.279 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:50.279 "strip_size_kb": 64, 00:15:50.279 "state": "configuring", 00:15:50.279 "raid_level": "raid5f", 00:15:50.279 "superblock": true, 00:15:50.279 "num_base_bdevs": 3, 00:15:50.279 "num_base_bdevs_discovered": 2, 00:15:50.279 "num_base_bdevs_operational": 3, 00:15:50.279 "base_bdevs_list": [ 00:15:50.279 { 00:15:50.279 "name": "BaseBdev1", 00:15:50.279 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:50.279 "is_configured": true, 00:15:50.279 "data_offset": 2048, 00:15:50.279 "data_size": 63488 00:15:50.279 }, 00:15:50.279 { 00:15:50.279 "name": null, 00:15:50.279 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:50.279 "is_configured": false, 00:15:50.279 "data_offset": 0, 00:15:50.279 "data_size": 63488 00:15:50.279 }, 
00:15:50.279 { 00:15:50.279 "name": "BaseBdev3", 00:15:50.279 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:50.279 "is_configured": true, 00:15:50.279 "data_offset": 2048, 00:15:50.279 "data_size": 63488 00:15:50.279 } 00:15:50.279 ] 00:15:50.279 }' 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.279 11:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 [2024-11-04 11:48:16.224631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.848 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.106 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.106 "name": "Existed_Raid", 00:15:51.106 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:51.106 "strip_size_kb": 64, 00:15:51.106 "state": "configuring", 00:15:51.106 "raid_level": "raid5f", 00:15:51.106 "superblock": true, 00:15:51.106 "num_base_bdevs": 3, 00:15:51.106 "num_base_bdevs_discovered": 1, 00:15:51.106 
"num_base_bdevs_operational": 3, 00:15:51.106 "base_bdevs_list": [ 00:15:51.106 { 00:15:51.106 "name": null, 00:15:51.106 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:51.106 "is_configured": false, 00:15:51.106 "data_offset": 0, 00:15:51.106 "data_size": 63488 00:15:51.106 }, 00:15:51.106 { 00:15:51.106 "name": null, 00:15:51.106 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:51.106 "is_configured": false, 00:15:51.106 "data_offset": 0, 00:15:51.106 "data_size": 63488 00:15:51.106 }, 00:15:51.106 { 00:15:51.106 "name": "BaseBdev3", 00:15:51.106 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:51.106 "is_configured": true, 00:15:51.106 "data_offset": 2048, 00:15:51.106 "data_size": 63488 00:15:51.106 } 00:15:51.106 ] 00:15:51.106 }' 00:15:51.106 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.106 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.365 11:48:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.365 [2024-11-04 11:48:16.833164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.365 "name": "Existed_Raid", 00:15:51.365 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:51.365 "strip_size_kb": 64, 00:15:51.365 "state": "configuring", 00:15:51.365 "raid_level": "raid5f", 00:15:51.365 "superblock": true, 00:15:51.365 "num_base_bdevs": 3, 00:15:51.365 "num_base_bdevs_discovered": 2, 00:15:51.365 "num_base_bdevs_operational": 3, 00:15:51.365 "base_bdevs_list": [ 00:15:51.365 { 00:15:51.365 "name": null, 00:15:51.365 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:51.365 "is_configured": false, 00:15:51.365 "data_offset": 0, 00:15:51.365 "data_size": 63488 00:15:51.365 }, 00:15:51.365 { 00:15:51.365 "name": "BaseBdev2", 00:15:51.365 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:51.365 "is_configured": true, 00:15:51.365 "data_offset": 2048, 00:15:51.365 "data_size": 63488 00:15:51.365 }, 00:15:51.365 { 00:15:51.365 "name": "BaseBdev3", 00:15:51.365 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:51.365 "is_configured": true, 00:15:51.365 "data_offset": 2048, 00:15:51.365 "data_size": 63488 00:15:51.365 } 00:15:51.365 ] 00:15:51.365 }' 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.365 11:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6485a218-b924-4d06-9b66-75151e610d54 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.934 [2024-11-04 11:48:17.417387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:51.934 NewBaseBdev 00:15:51.934 [2024-11-04 11:48:17.417740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:51.934 [2024-11-04 11:48:17.417764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.934 [2024-11-04 11:48:17.418045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.934 [2024-11-04 11:48:17.424670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:51.934 [2024-11-04 11:48:17.424738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:51.934 [2024-11-04 11:48:17.425014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.934 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.934 [ 00:15:51.934 { 00:15:51.934 "name": "NewBaseBdev", 00:15:51.934 "aliases": [ 00:15:51.934 
"6485a218-b924-4d06-9b66-75151e610d54" 00:15:51.934 ], 00:15:51.934 "product_name": "Malloc disk", 00:15:51.934 "block_size": 512, 00:15:51.934 "num_blocks": 65536, 00:15:51.934 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:51.934 "assigned_rate_limits": { 00:15:51.934 "rw_ios_per_sec": 0, 00:15:51.934 "rw_mbytes_per_sec": 0, 00:15:51.934 "r_mbytes_per_sec": 0, 00:15:51.934 "w_mbytes_per_sec": 0 00:15:51.934 }, 00:15:51.934 "claimed": true, 00:15:51.934 "claim_type": "exclusive_write", 00:15:51.934 "zoned": false, 00:15:51.934 "supported_io_types": { 00:15:51.934 "read": true, 00:15:51.934 "write": true, 00:15:52.193 "unmap": true, 00:15:52.193 "flush": true, 00:15:52.193 "reset": true, 00:15:52.193 "nvme_admin": false, 00:15:52.193 "nvme_io": false, 00:15:52.193 "nvme_io_md": false, 00:15:52.193 "write_zeroes": true, 00:15:52.193 "zcopy": true, 00:15:52.193 "get_zone_info": false, 00:15:52.193 "zone_management": false, 00:15:52.193 "zone_append": false, 00:15:52.193 "compare": false, 00:15:52.193 "compare_and_write": false, 00:15:52.193 "abort": true, 00:15:52.193 "seek_hole": false, 00:15:52.193 "seek_data": false, 00:15:52.193 "copy": true, 00:15:52.193 "nvme_iov_md": false 00:15:52.193 }, 00:15:52.193 "memory_domains": [ 00:15:52.193 { 00:15:52.193 "dma_device_id": "system", 00:15:52.193 "dma_device_type": 1 00:15:52.193 }, 00:15:52.193 { 00:15:52.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.193 "dma_device_type": 2 00:15:52.193 } 00:15:52.193 ], 00:15:52.193 "driver_specific": {} 00:15:52.193 } 00:15:52.193 ] 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.193 "name": "Existed_Raid", 00:15:52.193 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:52.193 "strip_size_kb": 64, 00:15:52.193 "state": "online", 00:15:52.193 "raid_level": "raid5f", 00:15:52.193 "superblock": true, 00:15:52.193 "num_base_bdevs": 3, 00:15:52.193 
"num_base_bdevs_discovered": 3, 00:15:52.193 "num_base_bdevs_operational": 3, 00:15:52.193 "base_bdevs_list": [ 00:15:52.193 { 00:15:52.193 "name": "NewBaseBdev", 00:15:52.193 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:52.193 "is_configured": true, 00:15:52.193 "data_offset": 2048, 00:15:52.193 "data_size": 63488 00:15:52.193 }, 00:15:52.193 { 00:15:52.193 "name": "BaseBdev2", 00:15:52.193 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:52.193 "is_configured": true, 00:15:52.193 "data_offset": 2048, 00:15:52.193 "data_size": 63488 00:15:52.193 }, 00:15:52.193 { 00:15:52.193 "name": "BaseBdev3", 00:15:52.193 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:52.193 "is_configured": true, 00:15:52.193 "data_offset": 2048, 00:15:52.193 "data_size": 63488 00:15:52.193 } 00:15:52.193 ] 00:15:52.193 }' 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.193 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.453 [2024-11-04 11:48:17.915957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.453 "name": "Existed_Raid", 00:15:52.453 "aliases": [ 00:15:52.453 "eeba6aff-3804-4892-9c84-6c008fb565e0" 00:15:52.453 ], 00:15:52.453 "product_name": "Raid Volume", 00:15:52.453 "block_size": 512, 00:15:52.453 "num_blocks": 126976, 00:15:52.453 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:52.453 "assigned_rate_limits": { 00:15:52.453 "rw_ios_per_sec": 0, 00:15:52.453 "rw_mbytes_per_sec": 0, 00:15:52.453 "r_mbytes_per_sec": 0, 00:15:52.453 "w_mbytes_per_sec": 0 00:15:52.453 }, 00:15:52.453 "claimed": false, 00:15:52.453 "zoned": false, 00:15:52.453 "supported_io_types": { 00:15:52.453 "read": true, 00:15:52.453 "write": true, 00:15:52.453 "unmap": false, 00:15:52.453 "flush": false, 00:15:52.453 "reset": true, 00:15:52.453 "nvme_admin": false, 00:15:52.453 "nvme_io": false, 00:15:52.453 "nvme_io_md": false, 00:15:52.453 "write_zeroes": true, 00:15:52.453 "zcopy": false, 00:15:52.453 "get_zone_info": false, 00:15:52.453 "zone_management": false, 00:15:52.453 "zone_append": false, 00:15:52.453 "compare": false, 00:15:52.453 "compare_and_write": false, 00:15:52.453 "abort": false, 00:15:52.453 "seek_hole": false, 00:15:52.453 "seek_data": false, 00:15:52.453 "copy": false, 00:15:52.453 "nvme_iov_md": false 00:15:52.453 }, 00:15:52.453 "driver_specific": { 00:15:52.453 "raid": { 00:15:52.453 "uuid": "eeba6aff-3804-4892-9c84-6c008fb565e0", 00:15:52.453 "strip_size_kb": 64, 00:15:52.453 "state": 
"online", 00:15:52.453 "raid_level": "raid5f", 00:15:52.453 "superblock": true, 00:15:52.453 "num_base_bdevs": 3, 00:15:52.453 "num_base_bdevs_discovered": 3, 00:15:52.453 "num_base_bdevs_operational": 3, 00:15:52.453 "base_bdevs_list": [ 00:15:52.453 { 00:15:52.453 "name": "NewBaseBdev", 00:15:52.453 "uuid": "6485a218-b924-4d06-9b66-75151e610d54", 00:15:52.453 "is_configured": true, 00:15:52.453 "data_offset": 2048, 00:15:52.453 "data_size": 63488 00:15:52.453 }, 00:15:52.453 { 00:15:52.453 "name": "BaseBdev2", 00:15:52.453 "uuid": "c5e6740f-6b31-4a51-81c8-927fbd2d0df8", 00:15:52.453 "is_configured": true, 00:15:52.453 "data_offset": 2048, 00:15:52.453 "data_size": 63488 00:15:52.453 }, 00:15:52.453 { 00:15:52.453 "name": "BaseBdev3", 00:15:52.453 "uuid": "b2847304-958c-459d-900f-ac9a79fc2104", 00:15:52.453 "is_configured": true, 00:15:52.453 "data_offset": 2048, 00:15:52.453 "data_size": 63488 00:15:52.453 } 00:15:52.453 ] 00:15:52.453 } 00:15:52.453 } 00:15:52.453 }' 00:15:52.453 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.713 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:52.713 BaseBdev2 00:15:52.713 BaseBdev3' 00:15:52.713 11:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b NewBaseBdev 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.713 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.713 [2024-11-04 11:48:18.199238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.713 [2024-11-04 11:48:18.199314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.714 [2024-11-04 11:48:18.199461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.714 [2024-11-04 11:48:18.199830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.714 [2024-11-04 11:48:18.199891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:52.714 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.714 11:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80752 00:15:52.714 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80752 ']' 00:15:52.714 11:48:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 80752 00:15:52.714 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:52.714 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:52.714 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80752 00:15:52.973 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:52.973 killing process with pid 80752 00:15:52.973 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:52.973 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80752' 00:15:52.973 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80752 00:15:52.973 [2024-11-04 11:48:18.246344] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.973 11:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80752 00:15:53.231 [2024-11-04 11:48:18.558182] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.167 11:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:54.167 00:15:54.167 real 0m10.814s 00:15:54.167 user 0m17.084s 00:15:54.167 sys 0m1.986s 00:15:54.167 11:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:54.167 11:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.167 ************************************ 00:15:54.167 END TEST raid5f_state_function_test_sb 00:15:54.167 ************************************ 00:15:54.425 11:48:19 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:54.425 11:48:19 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:54.425 11:48:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:54.425 11:48:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.425 ************************************ 00:15:54.425 START TEST raid5f_superblock_test 00:15:54.425 ************************************ 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:54.425 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81377 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81377 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81377 ']' 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:54.426 11:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.426 [2024-11-04 11:48:19.842323] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:15:54.426 [2024-11-04 11:48:19.842561] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81377 ] 00:15:54.683 [2024-11-04 11:48:20.018716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.683 [2024-11-04 11:48:20.140540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.942 [2024-11-04 11:48:20.346023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.942 [2024-11-04 11:48:20.346173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.201 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 malloc1 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 [2024-11-04 11:48:20.764848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.460 [2024-11-04 11:48:20.764961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.460 [2024-11-04 11:48:20.765015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:55.460 [2024-11-04 11:48:20.765067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.460 [2024-11-04 11:48:20.767193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.460 [2024-11-04 11:48:20.767267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.460 pt1 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 malloc2 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 [2024-11-04 11:48:20.819637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.460 [2024-11-04 11:48:20.819742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.460 [2024-11-04 11:48:20.819795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.460 [2024-11-04 11:48:20.819835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.460 [2024-11-04 11:48:20.821897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.460 [2024-11-04 11:48:20.821986] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.460 pt2 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 malloc3 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 [2024-11-04 11:48:20.901870] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:55.460 [2024-11-04 11:48:20.901966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.460 [2024-11-04 11:48:20.902016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.460 [2024-11-04 11:48:20.902054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.460 [2024-11-04 11:48:20.904305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.460 [2024-11-04 11:48:20.904425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:55.460 pt3 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 [2024-11-04 11:48:20.913950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.460 [2024-11-04 11:48:20.915793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.460 [2024-11-04 11:48:20.915901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.460 [2024-11-04 11:48:20.916128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.460 [2024-11-04 11:48:20.916191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:55.460 [2024-11-04 11:48:20.916496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:55.460 [2024-11-04 11:48:20.922184] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.460 [2024-11-04 11:48:20.922242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.460 [2024-11-04 11:48:20.922584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.460 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.460 "name": "raid_bdev1", 00:15:55.460 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:55.460 "strip_size_kb": 64, 00:15:55.460 "state": "online", 00:15:55.460 "raid_level": "raid5f", 00:15:55.461 "superblock": true, 00:15:55.461 "num_base_bdevs": 3, 00:15:55.461 "num_base_bdevs_discovered": 3, 00:15:55.461 "num_base_bdevs_operational": 3, 00:15:55.461 "base_bdevs_list": [ 00:15:55.461 { 00:15:55.461 "name": "pt1", 00:15:55.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.461 "is_configured": true, 00:15:55.461 "data_offset": 2048, 00:15:55.461 "data_size": 63488 00:15:55.461 }, 00:15:55.461 { 00:15:55.461 "name": "pt2", 00:15:55.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.461 "is_configured": true, 00:15:55.461 "data_offset": 2048, 00:15:55.461 "data_size": 63488 00:15:55.461 }, 00:15:55.461 { 00:15:55.461 "name": "pt3", 00:15:55.461 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.461 "is_configured": true, 00:15:55.461 "data_offset": 2048, 00:15:55.461 "data_size": 63488 00:15:55.461 } 00:15:55.461 ] 00:15:55.461 }' 00:15:55.461 11:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.461 11:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:56.028 11:48:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.028 [2024-11-04 11:48:21.393067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.028 "name": "raid_bdev1", 00:15:56.028 "aliases": [ 00:15:56.028 "b3b4ecfc-876b-4295-ad12-2b5a86743d9b" 00:15:56.028 ], 00:15:56.028 "product_name": "Raid Volume", 00:15:56.028 "block_size": 512, 00:15:56.028 "num_blocks": 126976, 00:15:56.028 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:56.028 "assigned_rate_limits": { 00:15:56.028 "rw_ios_per_sec": 0, 00:15:56.028 "rw_mbytes_per_sec": 0, 00:15:56.028 "r_mbytes_per_sec": 0, 00:15:56.028 "w_mbytes_per_sec": 0 00:15:56.028 }, 00:15:56.028 "claimed": false, 00:15:56.028 "zoned": false, 00:15:56.028 "supported_io_types": { 00:15:56.028 "read": true, 00:15:56.028 "write": true, 00:15:56.028 "unmap": false, 00:15:56.028 "flush": false, 00:15:56.028 "reset": true, 00:15:56.028 "nvme_admin": false, 00:15:56.028 "nvme_io": false, 00:15:56.028 "nvme_io_md": false, 
00:15:56.028 "write_zeroes": true, 00:15:56.028 "zcopy": false, 00:15:56.028 "get_zone_info": false, 00:15:56.028 "zone_management": false, 00:15:56.028 "zone_append": false, 00:15:56.028 "compare": false, 00:15:56.028 "compare_and_write": false, 00:15:56.028 "abort": false, 00:15:56.028 "seek_hole": false, 00:15:56.028 "seek_data": false, 00:15:56.028 "copy": false, 00:15:56.028 "nvme_iov_md": false 00:15:56.028 }, 00:15:56.028 "driver_specific": { 00:15:56.028 "raid": { 00:15:56.028 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:56.028 "strip_size_kb": 64, 00:15:56.028 "state": "online", 00:15:56.028 "raid_level": "raid5f", 00:15:56.028 "superblock": true, 00:15:56.028 "num_base_bdevs": 3, 00:15:56.028 "num_base_bdevs_discovered": 3, 00:15:56.028 "num_base_bdevs_operational": 3, 00:15:56.028 "base_bdevs_list": [ 00:15:56.028 { 00:15:56.028 "name": "pt1", 00:15:56.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.028 "is_configured": true, 00:15:56.028 "data_offset": 2048, 00:15:56.028 "data_size": 63488 00:15:56.028 }, 00:15:56.028 { 00:15:56.028 "name": "pt2", 00:15:56.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.028 "is_configured": true, 00:15:56.028 "data_offset": 2048, 00:15:56.028 "data_size": 63488 00:15:56.028 }, 00:15:56.028 { 00:15:56.028 "name": "pt3", 00:15:56.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.028 "is_configured": true, 00:15:56.028 "data_offset": 2048, 00:15:56.028 "data_size": 63488 00:15:56.028 } 00:15:56.028 ] 00:15:56.028 } 00:15:56.028 } 00:15:56.028 }' 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:56.028 pt2 00:15:56.028 pt3' 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.028 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.287 
11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.287 [2024-11-04 11:48:21.636646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3b4ecfc-876b-4295-ad12-2b5a86743d9b 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b3b4ecfc-876b-4295-ad12-2b5a86743d9b ']' 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.287 11:48:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.287 [2024-11-04 11:48:21.668408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.287 [2024-11-04 11:48:21.668491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.287 [2024-11-04 11:48:21.668622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.287 [2024-11-04 11:48:21.668749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.287 [2024-11-04 11:48:21.668801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.287 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:56.288 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:56.547 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.548 [2024-11-04 11:48:21.828243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:56.548 [2024-11-04 11:48:21.830112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:56.548 [2024-11-04 11:48:21.830223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:56.548 [2024-11-04 11:48:21.830295] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:56.548 [2024-11-04 11:48:21.830445] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:56.548 [2024-11-04 11:48:21.830545] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:56.548 [2024-11-04 11:48:21.830642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.548 [2024-11-04 11:48:21.830694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:56.548 request: 00:15:56.548 { 00:15:56.548 "name": "raid_bdev1", 00:15:56.548 "raid_level": "raid5f", 00:15:56.548 "base_bdevs": [ 00:15:56.548 "malloc1", 00:15:56.548 "malloc2", 00:15:56.548 "malloc3" 00:15:56.548 ], 00:15:56.548 "strip_size_kb": 64, 00:15:56.548 "superblock": false, 00:15:56.548 "method": "bdev_raid_create", 00:15:56.548 "req_id": 1 00:15:56.548 } 00:15:56.548 Got JSON-RPC error response 00:15:56.548 response: 00:15:56.548 { 00:15:56.548 "code": -17, 00:15:56.548 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:56.548 } 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.548 [2024-11-04 11:48:21.892026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.548 [2024-11-04 11:48:21.892147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.548 [2024-11-04 11:48:21.892202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:56.548 [2024-11-04 11:48:21.892251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.548 [2024-11-04 11:48:21.894739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.548 [2024-11-04 11:48:21.894824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.548 [2024-11-04 11:48:21.894998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:56.548 [2024-11-04 11:48:21.895132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:56.548 pt1 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.548 "name": "raid_bdev1", 00:15:56.548 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:56.548 "strip_size_kb": 64, 00:15:56.548 "state": "configuring", 00:15:56.548 "raid_level": "raid5f", 00:15:56.548 "superblock": true, 00:15:56.548 "num_base_bdevs": 3, 00:15:56.548 "num_base_bdevs_discovered": 1, 00:15:56.548 
"num_base_bdevs_operational": 3, 00:15:56.548 "base_bdevs_list": [ 00:15:56.548 { 00:15:56.548 "name": "pt1", 00:15:56.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.548 "is_configured": true, 00:15:56.548 "data_offset": 2048, 00:15:56.548 "data_size": 63488 00:15:56.548 }, 00:15:56.548 { 00:15:56.548 "name": null, 00:15:56.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.548 "is_configured": false, 00:15:56.548 "data_offset": 2048, 00:15:56.548 "data_size": 63488 00:15:56.548 }, 00:15:56.548 { 00:15:56.548 "name": null, 00:15:56.548 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.548 "is_configured": false, 00:15:56.548 "data_offset": 2048, 00:15:56.548 "data_size": 63488 00:15:56.548 } 00:15:56.548 ] 00:15:56.548 }' 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.548 11:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.117 [2024-11-04 11:48:22.351264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.117 [2024-11-04 11:48:22.351383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.117 [2024-11-04 11:48:22.351437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:57.117 [2024-11-04 11:48:22.351450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.117 [2024-11-04 11:48:22.352018] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.117 [2024-11-04 11:48:22.352059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.117 [2024-11-04 11:48:22.352207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:57.117 [2024-11-04 11:48:22.352244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.117 pt2 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.117 [2024-11-04 11:48:22.363227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.117 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.117 "name": "raid_bdev1", 00:15:57.117 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:57.117 "strip_size_kb": 64, 00:15:57.117 "state": "configuring", 00:15:57.117 "raid_level": "raid5f", 00:15:57.117 "superblock": true, 00:15:57.117 "num_base_bdevs": 3, 00:15:57.117 "num_base_bdevs_discovered": 1, 00:15:57.118 "num_base_bdevs_operational": 3, 00:15:57.118 "base_bdevs_list": [ 00:15:57.118 { 00:15:57.118 "name": "pt1", 00:15:57.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.118 "is_configured": true, 00:15:57.118 "data_offset": 2048, 00:15:57.118 "data_size": 63488 00:15:57.118 }, 00:15:57.118 { 00:15:57.118 "name": null, 00:15:57.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.118 "is_configured": false, 00:15:57.118 "data_offset": 0, 00:15:57.118 "data_size": 63488 00:15:57.118 }, 00:15:57.118 { 00:15:57.118 "name": null, 00:15:57.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.118 "is_configured": false, 00:15:57.118 "data_offset": 2048, 00:15:57.118 "data_size": 63488 00:15:57.118 } 00:15:57.118 ] 00:15:57.118 }' 00:15:57.118 11:48:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.118 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.377 [2024-11-04 11:48:22.826516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.377 [2024-11-04 11:48:22.826647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.377 [2024-11-04 11:48:22.826679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:57.377 [2024-11-04 11:48:22.826694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.377 [2024-11-04 11:48:22.827268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.377 [2024-11-04 11:48:22.827306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.377 [2024-11-04 11:48:22.827440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:57.377 [2024-11-04 11:48:22.827486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.377 pt2 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:57.377 11:48:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.377 [2024-11-04 11:48:22.838474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:57.377 [2024-11-04 11:48:22.838583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.377 [2024-11-04 11:48:22.838628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:57.377 [2024-11-04 11:48:22.838686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.377 [2024-11-04 11:48:22.839167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.377 [2024-11-04 11:48:22.839241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:57.377 [2024-11-04 11:48:22.839412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:57.377 [2024-11-04 11:48:22.839478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:57.377 [2024-11-04 11:48:22.839677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:57.377 [2024-11-04 11:48:22.839718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:57.377 [2024-11-04 11:48:22.839989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:57.377 [2024-11-04 11:48:22.845495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:57.377 [2024-11-04 11:48:22.845549] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:57.377 [2024-11-04 11:48:22.845814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.377 pt3 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.377 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.378 11:48:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.378 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.378 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.637 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.637 "name": "raid_bdev1", 00:15:57.637 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:57.637 "strip_size_kb": 64, 00:15:57.637 "state": "online", 00:15:57.637 "raid_level": "raid5f", 00:15:57.637 "superblock": true, 00:15:57.637 "num_base_bdevs": 3, 00:15:57.637 "num_base_bdevs_discovered": 3, 00:15:57.637 "num_base_bdevs_operational": 3, 00:15:57.637 "base_bdevs_list": [ 00:15:57.637 { 00:15:57.637 "name": "pt1", 00:15:57.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.637 "is_configured": true, 00:15:57.637 "data_offset": 2048, 00:15:57.637 "data_size": 63488 00:15:57.637 }, 00:15:57.637 { 00:15:57.637 "name": "pt2", 00:15:57.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.637 "is_configured": true, 00:15:57.637 "data_offset": 2048, 00:15:57.637 "data_size": 63488 00:15:57.637 }, 00:15:57.637 { 00:15:57.637 "name": "pt3", 00:15:57.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.637 "is_configured": true, 00:15:57.637 "data_offset": 2048, 00:15:57.637 "data_size": 63488 00:15:57.637 } 00:15:57.637 ] 00:15:57.637 }' 00:15:57.637 11:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.637 11:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.897 [2024-11-04 11:48:23.304298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.897 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.897 "name": "raid_bdev1", 00:15:57.897 "aliases": [ 00:15:57.897 "b3b4ecfc-876b-4295-ad12-2b5a86743d9b" 00:15:57.897 ], 00:15:57.897 "product_name": "Raid Volume", 00:15:57.897 "block_size": 512, 00:15:57.897 "num_blocks": 126976, 00:15:57.897 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:57.897 "assigned_rate_limits": { 00:15:57.897 "rw_ios_per_sec": 0, 00:15:57.897 "rw_mbytes_per_sec": 0, 00:15:57.897 "r_mbytes_per_sec": 0, 00:15:57.897 "w_mbytes_per_sec": 0 00:15:57.897 }, 00:15:57.897 "claimed": false, 00:15:57.897 "zoned": false, 00:15:57.897 "supported_io_types": { 00:15:57.897 "read": true, 00:15:57.897 "write": true, 00:15:57.897 "unmap": false, 00:15:57.897 "flush": false, 00:15:57.897 "reset": true, 00:15:57.897 "nvme_admin": false, 00:15:57.897 "nvme_io": false, 00:15:57.897 "nvme_io_md": false, 00:15:57.898 "write_zeroes": true, 00:15:57.898 "zcopy": false, 00:15:57.898 
"get_zone_info": false, 00:15:57.898 "zone_management": false, 00:15:57.898 "zone_append": false, 00:15:57.898 "compare": false, 00:15:57.898 "compare_and_write": false, 00:15:57.898 "abort": false, 00:15:57.898 "seek_hole": false, 00:15:57.898 "seek_data": false, 00:15:57.898 "copy": false, 00:15:57.898 "nvme_iov_md": false 00:15:57.898 }, 00:15:57.898 "driver_specific": { 00:15:57.898 "raid": { 00:15:57.898 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:57.898 "strip_size_kb": 64, 00:15:57.898 "state": "online", 00:15:57.898 "raid_level": "raid5f", 00:15:57.898 "superblock": true, 00:15:57.898 "num_base_bdevs": 3, 00:15:57.898 "num_base_bdevs_discovered": 3, 00:15:57.898 "num_base_bdevs_operational": 3, 00:15:57.898 "base_bdevs_list": [ 00:15:57.898 { 00:15:57.898 "name": "pt1", 00:15:57.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.898 "is_configured": true, 00:15:57.898 "data_offset": 2048, 00:15:57.898 "data_size": 63488 00:15:57.898 }, 00:15:57.898 { 00:15:57.898 "name": "pt2", 00:15:57.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.898 "is_configured": true, 00:15:57.898 "data_offset": 2048, 00:15:57.898 "data_size": 63488 00:15:57.898 }, 00:15:57.898 { 00:15:57.898 "name": "pt3", 00:15:57.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.898 "is_configured": true, 00:15:57.898 "data_offset": 2048, 00:15:57.898 "data_size": 63488 00:15:57.898 } 00:15:57.898 ] 00:15:57.898 } 00:15:57.898 } 00:15:57.898 }' 00:15:57.898 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.898 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:57.898 pt2 00:15:57.898 pt3' 00:15:57.898 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.157 11:48:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.157 [2024-11-04 11:48:23.571791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b3b4ecfc-876b-4295-ad12-2b5a86743d9b '!=' b3b4ecfc-876b-4295-ad12-2b5a86743d9b ']' 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.157 [2024-11-04 11:48:23.611619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.157 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.158 "name": "raid_bdev1", 00:15:58.158 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:58.158 "strip_size_kb": 64, 00:15:58.158 "state": "online", 00:15:58.158 "raid_level": "raid5f", 00:15:58.158 "superblock": true, 00:15:58.158 "num_base_bdevs": 3, 00:15:58.158 "num_base_bdevs_discovered": 2, 00:15:58.158 "num_base_bdevs_operational": 2, 00:15:58.158 "base_bdevs_list": [ 00:15:58.158 { 00:15:58.158 "name": null, 00:15:58.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.158 "is_configured": false, 00:15:58.158 "data_offset": 0, 00:15:58.158 "data_size": 63488 00:15:58.158 }, 00:15:58.158 { 00:15:58.158 "name": "pt2", 00:15:58.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.158 "is_configured": true, 00:15:58.158 "data_offset": 2048, 00:15:58.158 "data_size": 63488 00:15:58.158 }, 00:15:58.158 { 00:15:58.158 "name": "pt3", 00:15:58.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.158 "is_configured": true, 00:15:58.158 "data_offset": 2048, 00:15:58.158 "data_size": 63488 00:15:58.158 } 00:15:58.158 ] 00:15:58.158 }' 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.158 11:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.725 [2024-11-04 11:48:24.054792] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.725 [2024-11-04 11:48:24.054885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.725 [2024-11-04 11:48:24.054995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.725 [2024-11-04 11:48:24.055112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.725 [2024-11-04 11:48:24.055164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:58.725 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 [2024-11-04 11:48:24.122628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.726 [2024-11-04 11:48:24.122691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.726 [2024-11-04 11:48:24.122713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:58.726 [2024-11-04 11:48:24.122725] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:58.726 [2024-11-04 11:48:24.125027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.726 [2024-11-04 11:48:24.125124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.726 [2024-11-04 11:48:24.125234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.726 [2024-11-04 11:48:24.125302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.726 pt2 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.726 "name": "raid_bdev1", 00:15:58.726 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:58.726 "strip_size_kb": 64, 00:15:58.726 "state": "configuring", 00:15:58.726 "raid_level": "raid5f", 00:15:58.726 "superblock": true, 00:15:58.726 "num_base_bdevs": 3, 00:15:58.726 "num_base_bdevs_discovered": 1, 00:15:58.726 "num_base_bdevs_operational": 2, 00:15:58.726 "base_bdevs_list": [ 00:15:58.726 { 00:15:58.726 "name": null, 00:15:58.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.726 "is_configured": false, 00:15:58.726 "data_offset": 2048, 00:15:58.726 "data_size": 63488 00:15:58.726 }, 00:15:58.726 { 00:15:58.726 "name": "pt2", 00:15:58.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.726 "is_configured": true, 00:15:58.726 "data_offset": 2048, 00:15:58.726 "data_size": 63488 00:15:58.726 }, 00:15:58.726 { 00:15:58.726 "name": null, 00:15:58.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.726 "is_configured": false, 00:15:58.726 "data_offset": 2048, 00:15:58.726 "data_size": 63488 00:15:58.726 } 00:15:58.726 ] 00:15:58.726 }' 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.726 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.296 [2024-11-04 11:48:24.518031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:59.296 [2024-11-04 11:48:24.518155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.296 [2024-11-04 11:48:24.518207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:59.296 [2024-11-04 11:48:24.518263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.296 [2024-11-04 11:48:24.518867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.296 [2024-11-04 11:48:24.518946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.296 [2024-11-04 11:48:24.519116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:59.296 [2024-11-04 11:48:24.519203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.296 [2024-11-04 11:48:24.519362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:59.296 [2024-11-04 11:48:24.519418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:59.296 [2024-11-04 11:48:24.519696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:59.296 [2024-11-04 11:48:24.525032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:59.296 [2024-11-04 11:48:24.525089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:59.296 [2024-11-04 11:48:24.525510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.296 pt3 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.296 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.297 11:48:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.297 "name": "raid_bdev1", 00:15:59.297 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:59.297 "strip_size_kb": 64, 00:15:59.297 "state": "online", 00:15:59.297 "raid_level": "raid5f", 00:15:59.297 "superblock": true, 00:15:59.297 "num_base_bdevs": 3, 00:15:59.297 "num_base_bdevs_discovered": 2, 00:15:59.297 "num_base_bdevs_operational": 2, 00:15:59.297 "base_bdevs_list": [ 00:15:59.297 { 00:15:59.297 "name": null, 00:15:59.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.297 "is_configured": false, 00:15:59.297 "data_offset": 2048, 00:15:59.297 "data_size": 63488 00:15:59.297 }, 00:15:59.297 { 00:15:59.297 "name": "pt2", 00:15:59.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.297 "is_configured": true, 00:15:59.297 "data_offset": 2048, 00:15:59.297 "data_size": 63488 00:15:59.297 }, 00:15:59.297 { 00:15:59.297 "name": "pt3", 00:15:59.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.297 "is_configured": true, 00:15:59.297 "data_offset": 2048, 00:15:59.297 "data_size": 63488 00:15:59.297 } 00:15:59.297 ] 00:15:59.297 }' 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.297 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 [2024-11-04 11:48:24.952221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.556 [2024-11-04 11:48:24.952343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.556 [2024-11-04 11:48:24.952495] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.556 [2024-11-04 11:48:24.952650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.556 [2024-11-04 11:48:24.952709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:59.556 11:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 [2024-11-04 11:48:25.028267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.556 [2024-11-04 11:48:25.028416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.556 [2024-11-04 11:48:25.028482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:59.556 [2024-11-04 11:48:25.028551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.556 [2024-11-04 11:48:25.031301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.556 [2024-11-04 11:48:25.031390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.556 [2024-11-04 11:48:25.031651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.556 [2024-11-04 11:48:25.031766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.556 [2024-11-04 11:48:25.031972] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:59.556 [2024-11-04 11:48:25.032032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.556 [2024-11-04 11:48:25.032160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:59.556 [2024-11-04 11:48:25.032296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.556 pt1 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:59.556 11:48:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.816 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.816 "name": "raid_bdev1", 00:15:59.816 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:15:59.816 "strip_size_kb": 64, 00:15:59.816 "state": "configuring", 00:15:59.816 "raid_level": "raid5f", 00:15:59.816 
"superblock": true, 00:15:59.816 "num_base_bdevs": 3, 00:15:59.816 "num_base_bdevs_discovered": 1, 00:15:59.816 "num_base_bdevs_operational": 2, 00:15:59.816 "base_bdevs_list": [ 00:15:59.816 { 00:15:59.816 "name": null, 00:15:59.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.816 "is_configured": false, 00:15:59.816 "data_offset": 2048, 00:15:59.816 "data_size": 63488 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "name": "pt2", 00:15:59.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.816 "is_configured": true, 00:15:59.816 "data_offset": 2048, 00:15:59.816 "data_size": 63488 00:15:59.816 }, 00:15:59.816 { 00:15:59.816 "name": null, 00:15:59.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.816 "is_configured": false, 00:15:59.816 "data_offset": 2048, 00:15:59.816 "data_size": 63488 00:15:59.816 } 00:15:59.816 ] 00:15:59.816 }' 00:15:59.816 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.816 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.076 [2024-11-04 11:48:25.500255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:00.076 [2024-11-04 11:48:25.500432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.076 [2024-11-04 11:48:25.500534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:00.076 [2024-11-04 11:48:25.500605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.076 [2024-11-04 11:48:25.501366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.076 [2024-11-04 11:48:25.501482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:00.076 [2024-11-04 11:48:25.501704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:00.076 [2024-11-04 11:48:25.501796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:00.076 [2024-11-04 11:48:25.502002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:00.076 [2024-11-04 11:48:25.502050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:00.076 [2024-11-04 11:48:25.502392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:00.076 [2024-11-04 11:48:25.508997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:00.076 [2024-11-04 11:48:25.509082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:00.076 pt3 00:16:00.076 [2024-11-04 11:48:25.509478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.076 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.077 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.077 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.077 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.077 "name": "raid_bdev1", 00:16:00.077 "uuid": "b3b4ecfc-876b-4295-ad12-2b5a86743d9b", 00:16:00.077 "strip_size_kb": 64, 00:16:00.077 "state": "online", 00:16:00.077 "raid_level": 
"raid5f", 00:16:00.077 "superblock": true, 00:16:00.077 "num_base_bdevs": 3, 00:16:00.077 "num_base_bdevs_discovered": 2, 00:16:00.077 "num_base_bdevs_operational": 2, 00:16:00.077 "base_bdevs_list": [ 00:16:00.077 { 00:16:00.077 "name": null, 00:16:00.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.077 "is_configured": false, 00:16:00.077 "data_offset": 2048, 00:16:00.077 "data_size": 63488 00:16:00.077 }, 00:16:00.077 { 00:16:00.077 "name": "pt2", 00:16:00.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.077 "is_configured": true, 00:16:00.077 "data_offset": 2048, 00:16:00.077 "data_size": 63488 00:16:00.077 }, 00:16:00.077 { 00:16:00.077 "name": "pt3", 00:16:00.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.077 "is_configured": true, 00:16:00.077 "data_offset": 2048, 00:16:00.077 "data_size": 63488 00:16:00.077 } 00:16:00.077 ] 00:16:00.077 }' 00:16:00.077 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.077 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.646 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:00.646 11:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:00.646 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.646 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.646 11:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.646 [2024-11-04 11:48:26.021236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b3b4ecfc-876b-4295-ad12-2b5a86743d9b '!=' b3b4ecfc-876b-4295-ad12-2b5a86743d9b ']' 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81377 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81377 ']' 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81377 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81377 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:00.646 killing process with pid 81377 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81377' 00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81377 00:16:00.646 [2024-11-04 11:48:26.093775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.646 [2024-11-04 11:48:26.093894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:16:00.646 11:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81377 00:16:00.646 [2024-11-04 11:48:26.093967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.646 [2024-11-04 11:48:26.093981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:00.905 [2024-11-04 11:48:26.414144] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.285 11:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:02.285 00:16:02.285 real 0m7.812s 00:16:02.285 user 0m12.135s 00:16:02.285 sys 0m1.386s 00:16:02.285 11:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:02.285 11:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.285 ************************************ 00:16:02.285 END TEST raid5f_superblock_test 00:16:02.285 ************************************ 00:16:02.285 11:48:27 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:02.285 11:48:27 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:02.285 11:48:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:02.285 11:48:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:02.285 11:48:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.285 ************************************ 00:16:02.285 START TEST raid5f_rebuild_test 00:16:02.285 ************************************ 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:02.285 11:48:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81815 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81815 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81815 ']' 00:16:02.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:02.285 11:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.285 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:02.285 Zero copy mechanism will not be used. 00:16:02.285 [2024-11-04 11:48:27.732936] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:16:02.285 [2024-11-04 11:48:27.733061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81815 ] 00:16:02.544 [2024-11-04 11:48:27.885519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.544 [2024-11-04 11:48:28.002360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.824 [2024-11-04 11:48:28.208775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.824 [2024-11-04 11:48:28.208837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.083 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:03.083 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:03.083 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.083 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.083 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.083 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.341 BaseBdev1_malloc 00:16:03.341 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.341 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.341 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.341 11:48:28 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.341 [2024-11-04 11:48:28.625978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.341 [2024-11-04 11:48:28.626140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.341 [2024-11-04 11:48:28.626208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:03.341 [2024-11-04 11:48:28.626297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.341 [2024-11-04 11:48:28.628714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.341 [2024-11-04 11:48:28.628810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.341 BaseBdev1 00:16:03.341 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.341 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.341 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:03.341 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 BaseBdev2_malloc 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 [2024-11-04 11:48:28.681758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:16:03.342 [2024-11-04 11:48:28.681888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.342 [2024-11-04 11:48:28.681958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:03.342 [2024-11-04 11:48:28.682043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.342 [2024-11-04 11:48:28.684537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.342 [2024-11-04 11:48:28.684628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.342 BaseBdev2 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 BaseBdev3_malloc 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 [2024-11-04 11:48:28.757885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:03.342 [2024-11-04 11:48:28.758048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.342 [2024-11-04 11:48:28.758129] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:16:03.342 [2024-11-04 11:48:28.758197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.342 [2024-11-04 11:48:28.760623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.342 [2024-11-04 11:48:28.760711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:03.342 BaseBdev3 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 spare_malloc 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 spare_delay 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 [2024-11-04 11:48:28.826872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:03.342 [2024-11-04 11:48:28.826929] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.342 [2024-11-04 11:48:28.826955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:03.342 [2024-11-04 11:48:28.826968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.342 [2024-11-04 11:48:28.829377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.342 [2024-11-04 11:48:28.829440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:03.342 spare 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.342 [2024-11-04 11:48:28.838913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.342 [2024-11-04 11:48:28.840875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.342 [2024-11-04 11:48:28.840956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.342 [2024-11-04 11:48:28.841041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:03.342 [2024-11-04 11:48:28.841053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:03.342 [2024-11-04 11:48:28.841317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:03.342 [2024-11-04 11:48:28.847497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:03.342 [2024-11-04 11:48:28.847578] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:03.342 [2024-11-04 11:48:28.847949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.342 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.602 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.602 11:48:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.602 "name": "raid_bdev1", 00:16:03.602 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:03.602 "strip_size_kb": 64, 00:16:03.602 "state": "online", 00:16:03.602 "raid_level": "raid5f", 00:16:03.602 "superblock": false, 00:16:03.602 "num_base_bdevs": 3, 00:16:03.602 "num_base_bdevs_discovered": 3, 00:16:03.602 "num_base_bdevs_operational": 3, 00:16:03.602 "base_bdevs_list": [ 00:16:03.602 { 00:16:03.602 "name": "BaseBdev1", 00:16:03.602 "uuid": "ecb29ca0-5faf-522e-8698-a8ba59b6811b", 00:16:03.602 "is_configured": true, 00:16:03.602 "data_offset": 0, 00:16:03.602 "data_size": 65536 00:16:03.602 }, 00:16:03.602 { 00:16:03.602 "name": "BaseBdev2", 00:16:03.602 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:03.602 "is_configured": true, 00:16:03.602 "data_offset": 0, 00:16:03.602 "data_size": 65536 00:16:03.602 }, 00:16:03.602 { 00:16:03.602 "name": "BaseBdev3", 00:16:03.602 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:03.602 "is_configured": true, 00:16:03.602 "data_offset": 0, 00:16:03.602 "data_size": 65536 00:16:03.602 } 00:16:03.602 ] 00:16:03.602 }' 00:16:03.602 11:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.602 11:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:03.863 [2024-11-04 11:48:29.298821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:16:03.863 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:04.123 [2024-11-04 11:48:29.558253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:04.123 /dev/nbd0 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.123 1+0 records in 00:16:04.123 1+0 records out 00:16:04.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455926 s, 9.0 MB/s 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:04.123 11:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:04.691 512+0 records in 00:16:04.691 512+0 records out 00:16:04.691 67108864 bytes (67 MB, 64 MiB) copied, 0.385421 s, 174 MB/s 00:16:04.691 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:04.691 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.691 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:04.691 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.691 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:04.691 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.691 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.951 
[2024-11-04 11:48:30.230445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.951 [2024-11-04 11:48:30.251344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.951 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.951 "name": "raid_bdev1", 00:16:04.951 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:04.951 "strip_size_kb": 64, 00:16:04.951 "state": "online", 00:16:04.951 "raid_level": "raid5f", 00:16:04.951 "superblock": false, 00:16:04.951 "num_base_bdevs": 3, 00:16:04.951 "num_base_bdevs_discovered": 2, 00:16:04.951 "num_base_bdevs_operational": 2, 00:16:04.951 "base_bdevs_list": [ 00:16:04.951 { 00:16:04.951 "name": null, 00:16:04.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.951 "is_configured": false, 00:16:04.951 "data_offset": 0, 00:16:04.952 "data_size": 65536 00:16:04.952 }, 00:16:04.952 { 00:16:04.952 "name": "BaseBdev2", 00:16:04.952 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:04.952 "is_configured": true, 00:16:04.952 "data_offset": 0, 00:16:04.952 "data_size": 65536 00:16:04.952 }, 00:16:04.952 { 00:16:04.952 "name": "BaseBdev3", 00:16:04.952 "uuid": 
"255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:04.952 "is_configured": true, 00:16:04.952 "data_offset": 0, 00:16:04.952 "data_size": 65536 00:16:04.952 } 00:16:04.952 ] 00:16:04.952 }' 00:16:04.952 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.952 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.211 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.211 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.211 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.211 [2024-11-04 11:48:30.678628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.211 [2024-11-04 11:48:30.698027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:05.211 11:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.211 11:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.211 [2024-11-04 11:48:30.707673] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.592 11:48:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.592 "name": "raid_bdev1", 00:16:06.592 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:06.592 "strip_size_kb": 64, 00:16:06.592 "state": "online", 00:16:06.592 "raid_level": "raid5f", 00:16:06.592 "superblock": false, 00:16:06.592 "num_base_bdevs": 3, 00:16:06.592 "num_base_bdevs_discovered": 3, 00:16:06.592 "num_base_bdevs_operational": 3, 00:16:06.592 "process": { 00:16:06.592 "type": "rebuild", 00:16:06.592 "target": "spare", 00:16:06.592 "progress": { 00:16:06.592 "blocks": 18432, 00:16:06.592 "percent": 14 00:16:06.592 } 00:16:06.592 }, 00:16:06.592 "base_bdevs_list": [ 00:16:06.592 { 00:16:06.592 "name": "spare", 00:16:06.592 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:06.592 "is_configured": true, 00:16:06.592 "data_offset": 0, 00:16:06.592 "data_size": 65536 00:16:06.592 }, 00:16:06.592 { 00:16:06.592 "name": "BaseBdev2", 00:16:06.592 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:06.592 "is_configured": true, 00:16:06.592 "data_offset": 0, 00:16:06.592 "data_size": 65536 00:16:06.592 }, 00:16:06.592 { 00:16:06.592 "name": "BaseBdev3", 00:16:06.592 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:06.592 "is_configured": true, 00:16:06.592 "data_offset": 0, 00:16:06.592 "data_size": 65536 00:16:06.592 } 00:16:06.592 ] 00:16:06.592 }' 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.592 [2024-11-04 11:48:31.848396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.592 [2024-11-04 11:48:31.918659] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:06.592 [2024-11-04 11:48:31.919213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.592 [2024-11-04 11:48:31.919249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:06.592 [2024-11-04 11:48:31.919262] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.592 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.593 11:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.593 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.593 "name": "raid_bdev1", 00:16:06.593 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:06.593 "strip_size_kb": 64, 00:16:06.593 "state": "online", 00:16:06.593 "raid_level": "raid5f", 00:16:06.593 "superblock": false, 00:16:06.593 "num_base_bdevs": 3, 00:16:06.593 "num_base_bdevs_discovered": 2, 00:16:06.593 "num_base_bdevs_operational": 2, 00:16:06.593 "base_bdevs_list": [ 00:16:06.593 { 00:16:06.593 "name": null, 00:16:06.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.593 "is_configured": false, 00:16:06.593 "data_offset": 0, 00:16:06.593 "data_size": 65536 00:16:06.593 }, 00:16:06.593 { 00:16:06.593 "name": "BaseBdev2", 00:16:06.593 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:06.593 "is_configured": true, 00:16:06.593 "data_offset": 0, 00:16:06.593 "data_size": 65536 00:16:06.593 }, 00:16:06.593 { 00:16:06.593 "name": "BaseBdev3", 00:16:06.593 "uuid": 
"255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:06.593 "is_configured": true, 00:16:06.593 "data_offset": 0, 00:16:06.593 "data_size": 65536 00:16:06.593 } 00:16:06.593 ] 00:16:06.593 }' 00:16:06.593 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.593 11:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.886 "name": "raid_bdev1", 00:16:06.886 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:06.886 "strip_size_kb": 64, 00:16:06.886 "state": "online", 00:16:06.886 "raid_level": "raid5f", 00:16:06.886 "superblock": false, 00:16:06.886 "num_base_bdevs": 3, 00:16:06.886 "num_base_bdevs_discovered": 2, 00:16:06.886 "num_base_bdevs_operational": 2, 00:16:06.886 "base_bdevs_list": [ 00:16:06.886 { 00:16:06.886 
"name": null, 00:16:06.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.886 "is_configured": false, 00:16:06.886 "data_offset": 0, 00:16:06.886 "data_size": 65536 00:16:06.886 }, 00:16:06.886 { 00:16:06.886 "name": "BaseBdev2", 00:16:06.886 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:06.886 "is_configured": true, 00:16:06.886 "data_offset": 0, 00:16:06.886 "data_size": 65536 00:16:06.886 }, 00:16:06.886 { 00:16:06.886 "name": "BaseBdev3", 00:16:06.886 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:06.886 "is_configured": true, 00:16:06.886 "data_offset": 0, 00:16:06.886 "data_size": 65536 00:16:06.886 } 00:16:06.886 ] 00:16:06.886 }' 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.886 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.145 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.145 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.145 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.145 11:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.145 11:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.145 [2024-11-04 11:48:32.442119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.145 [2024-11-04 11:48:32.459360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:07.145 11:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.145 11:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:07.145 [2024-11-04 11:48:32.467565] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.083 "name": "raid_bdev1", 00:16:08.083 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:08.083 "strip_size_kb": 64, 00:16:08.083 "state": "online", 00:16:08.083 "raid_level": "raid5f", 00:16:08.083 "superblock": false, 00:16:08.083 "num_base_bdevs": 3, 00:16:08.083 "num_base_bdevs_discovered": 3, 00:16:08.083 "num_base_bdevs_operational": 3, 00:16:08.083 "process": { 00:16:08.083 "type": "rebuild", 00:16:08.083 "target": "spare", 00:16:08.083 "progress": { 00:16:08.083 "blocks": 18432, 00:16:08.083 "percent": 14 00:16:08.083 } 00:16:08.083 }, 00:16:08.083 "base_bdevs_list": [ 00:16:08.083 { 00:16:08.083 "name": "spare", 00:16:08.083 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:08.083 "is_configured": true, 00:16:08.083 "data_offset": 0, 
00:16:08.083 "data_size": 65536 00:16:08.083 }, 00:16:08.083 { 00:16:08.083 "name": "BaseBdev2", 00:16:08.083 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:08.083 "is_configured": true, 00:16:08.083 "data_offset": 0, 00:16:08.083 "data_size": 65536 00:16:08.083 }, 00:16:08.083 { 00:16:08.083 "name": "BaseBdev3", 00:16:08.083 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:08.083 "is_configured": true, 00:16:08.083 "data_offset": 0, 00:16:08.083 "data_size": 65536 00:16:08.083 } 00:16:08.083 ] 00:16:08.083 }' 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=555 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.083 11:48:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.083 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.341 11:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.341 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.341 "name": "raid_bdev1", 00:16:08.341 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:08.341 "strip_size_kb": 64, 00:16:08.341 "state": "online", 00:16:08.341 "raid_level": "raid5f", 00:16:08.341 "superblock": false, 00:16:08.341 "num_base_bdevs": 3, 00:16:08.341 "num_base_bdevs_discovered": 3, 00:16:08.341 "num_base_bdevs_operational": 3, 00:16:08.341 "process": { 00:16:08.341 "type": "rebuild", 00:16:08.341 "target": "spare", 00:16:08.341 "progress": { 00:16:08.341 "blocks": 22528, 00:16:08.341 "percent": 17 00:16:08.341 } 00:16:08.341 }, 00:16:08.341 "base_bdevs_list": [ 00:16:08.341 { 00:16:08.341 "name": "spare", 00:16:08.341 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:08.341 "is_configured": true, 00:16:08.342 "data_offset": 0, 00:16:08.342 "data_size": 65536 00:16:08.342 }, 00:16:08.342 { 00:16:08.342 "name": "BaseBdev2", 00:16:08.342 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:08.342 "is_configured": true, 00:16:08.342 "data_offset": 0, 00:16:08.342 "data_size": 65536 00:16:08.342 }, 00:16:08.342 { 00:16:08.342 "name": "BaseBdev3", 00:16:08.342 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:08.342 "is_configured": true, 00:16:08.342 "data_offset": 0, 00:16:08.342 "data_size": 65536 00:16:08.342 } 
00:16:08.342 ] 00:16:08.342 }' 00:16:08.342 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.342 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.342 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.342 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.342 11:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.277 "name": "raid_bdev1", 00:16:09.277 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:09.277 
"strip_size_kb": 64, 00:16:09.277 "state": "online", 00:16:09.277 "raid_level": "raid5f", 00:16:09.277 "superblock": false, 00:16:09.277 "num_base_bdevs": 3, 00:16:09.277 "num_base_bdevs_discovered": 3, 00:16:09.277 "num_base_bdevs_operational": 3, 00:16:09.277 "process": { 00:16:09.277 "type": "rebuild", 00:16:09.277 "target": "spare", 00:16:09.277 "progress": { 00:16:09.277 "blocks": 45056, 00:16:09.277 "percent": 34 00:16:09.277 } 00:16:09.277 }, 00:16:09.277 "base_bdevs_list": [ 00:16:09.277 { 00:16:09.277 "name": "spare", 00:16:09.277 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:09.277 "is_configured": true, 00:16:09.277 "data_offset": 0, 00:16:09.277 "data_size": 65536 00:16:09.277 }, 00:16:09.277 { 00:16:09.277 "name": "BaseBdev2", 00:16:09.277 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:09.277 "is_configured": true, 00:16:09.277 "data_offset": 0, 00:16:09.277 "data_size": 65536 00:16:09.277 }, 00:16:09.277 { 00:16:09.277 "name": "BaseBdev3", 00:16:09.277 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:09.277 "is_configured": true, 00:16:09.277 "data_offset": 0, 00:16:09.277 "data_size": 65536 00:16:09.277 } 00:16:09.277 ] 00:16:09.277 }' 00:16:09.277 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.537 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.537 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.537 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.537 11:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.475 11:48:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.475 "name": "raid_bdev1", 00:16:10.475 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:10.475 "strip_size_kb": 64, 00:16:10.475 "state": "online", 00:16:10.475 "raid_level": "raid5f", 00:16:10.475 "superblock": false, 00:16:10.475 "num_base_bdevs": 3, 00:16:10.475 "num_base_bdevs_discovered": 3, 00:16:10.475 "num_base_bdevs_operational": 3, 00:16:10.475 "process": { 00:16:10.475 "type": "rebuild", 00:16:10.475 "target": "spare", 00:16:10.475 "progress": { 00:16:10.475 "blocks": 67584, 00:16:10.475 "percent": 51 00:16:10.475 } 00:16:10.475 }, 00:16:10.475 "base_bdevs_list": [ 00:16:10.475 { 00:16:10.475 "name": "spare", 00:16:10.475 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:10.475 "is_configured": true, 00:16:10.475 "data_offset": 0, 00:16:10.475 "data_size": 65536 00:16:10.475 }, 00:16:10.475 { 00:16:10.475 "name": "BaseBdev2", 00:16:10.475 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:10.475 
"is_configured": true, 00:16:10.475 "data_offset": 0, 00:16:10.475 "data_size": 65536 00:16:10.475 }, 00:16:10.475 { 00:16:10.475 "name": "BaseBdev3", 00:16:10.475 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:10.475 "is_configured": true, 00:16:10.475 "data_offset": 0, 00:16:10.475 "data_size": 65536 00:16:10.475 } 00:16:10.475 ] 00:16:10.475 }' 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.475 11:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.734 11:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.734 11:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.674 "name": "raid_bdev1", 00:16:11.674 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:11.674 "strip_size_kb": 64, 00:16:11.674 "state": "online", 00:16:11.674 "raid_level": "raid5f", 00:16:11.674 "superblock": false, 00:16:11.674 "num_base_bdevs": 3, 00:16:11.674 "num_base_bdevs_discovered": 3, 00:16:11.674 "num_base_bdevs_operational": 3, 00:16:11.674 "process": { 00:16:11.674 "type": "rebuild", 00:16:11.674 "target": "spare", 00:16:11.674 "progress": { 00:16:11.674 "blocks": 92160, 00:16:11.674 "percent": 70 00:16:11.674 } 00:16:11.674 }, 00:16:11.674 "base_bdevs_list": [ 00:16:11.674 { 00:16:11.674 "name": "spare", 00:16:11.674 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:11.674 "is_configured": true, 00:16:11.674 "data_offset": 0, 00:16:11.674 "data_size": 65536 00:16:11.674 }, 00:16:11.674 { 00:16:11.674 "name": "BaseBdev2", 00:16:11.674 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:11.674 "is_configured": true, 00:16:11.674 "data_offset": 0, 00:16:11.674 "data_size": 65536 00:16:11.674 }, 00:16:11.674 { 00:16:11.674 "name": "BaseBdev3", 00:16:11.674 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:11.674 "is_configured": true, 00:16:11.674 "data_offset": 0, 00:16:11.674 "data_size": 65536 00:16:11.674 } 00:16:11.674 ] 00:16:11.674 }' 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.674 11:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.674 11:48:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.056 "name": "raid_bdev1", 00:16:13.056 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:13.056 "strip_size_kb": 64, 00:16:13.056 "state": "online", 00:16:13.056 "raid_level": "raid5f", 00:16:13.056 "superblock": false, 00:16:13.056 "num_base_bdevs": 3, 00:16:13.056 "num_base_bdevs_discovered": 3, 00:16:13.056 "num_base_bdevs_operational": 3, 00:16:13.056 "process": { 00:16:13.056 "type": "rebuild", 00:16:13.056 "target": "spare", 00:16:13.056 "progress": { 00:16:13.056 "blocks": 114688, 00:16:13.056 "percent": 87 00:16:13.056 } 00:16:13.056 }, 00:16:13.056 "base_bdevs_list": [ 00:16:13.056 { 
00:16:13.056 "name": "spare", 00:16:13.056 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:13.056 "is_configured": true, 00:16:13.056 "data_offset": 0, 00:16:13.056 "data_size": 65536 00:16:13.056 }, 00:16:13.056 { 00:16:13.056 "name": "BaseBdev2", 00:16:13.056 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:13.056 "is_configured": true, 00:16:13.056 "data_offset": 0, 00:16:13.056 "data_size": 65536 00:16:13.056 }, 00:16:13.056 { 00:16:13.056 "name": "BaseBdev3", 00:16:13.056 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:13.056 "is_configured": true, 00:16:13.056 "data_offset": 0, 00:16:13.056 "data_size": 65536 00:16:13.056 } 00:16:13.056 ] 00:16:13.056 }' 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.056 11:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.624 [2024-11-04 11:48:38.925217] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:13.624 [2024-11-04 11:48:38.925407] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:13.624 [2024-11-04 11:48:38.925496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.883 11:48:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.883 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.883 "name": "raid_bdev1", 00:16:13.883 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:13.883 "strip_size_kb": 64, 00:16:13.883 "state": "online", 00:16:13.883 "raid_level": "raid5f", 00:16:13.883 "superblock": false, 00:16:13.883 "num_base_bdevs": 3, 00:16:13.883 "num_base_bdevs_discovered": 3, 00:16:13.883 "num_base_bdevs_operational": 3, 00:16:13.883 "base_bdevs_list": [ 00:16:13.883 { 00:16:13.883 "name": "spare", 00:16:13.883 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:13.883 "is_configured": true, 00:16:13.883 "data_offset": 0, 00:16:13.883 "data_size": 65536 00:16:13.884 }, 00:16:13.884 { 00:16:13.884 "name": "BaseBdev2", 00:16:13.884 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:13.884 "is_configured": true, 00:16:13.884 "data_offset": 0, 00:16:13.884 "data_size": 65536 00:16:13.884 }, 00:16:13.884 { 00:16:13.884 "name": "BaseBdev3", 00:16:13.884 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:13.884 "is_configured": true, 00:16:13.884 "data_offset": 0, 00:16:13.884 "data_size": 65536 00:16:13.884 } 
00:16:13.884 ] 00:16:13.884 }' 00:16:13.884 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.884 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:13.884 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.143 "name": "raid_bdev1", 00:16:14.143 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:14.143 "strip_size_kb": 64, 00:16:14.143 "state": "online", 00:16:14.143 "raid_level": "raid5f", 00:16:14.143 "superblock": false, 
00:16:14.143 "num_base_bdevs": 3, 00:16:14.143 "num_base_bdevs_discovered": 3, 00:16:14.143 "num_base_bdevs_operational": 3, 00:16:14.143 "base_bdevs_list": [ 00:16:14.143 { 00:16:14.143 "name": "spare", 00:16:14.143 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:14.143 "is_configured": true, 00:16:14.143 "data_offset": 0, 00:16:14.143 "data_size": 65536 00:16:14.143 }, 00:16:14.143 { 00:16:14.143 "name": "BaseBdev2", 00:16:14.143 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:14.143 "is_configured": true, 00:16:14.143 "data_offset": 0, 00:16:14.143 "data_size": 65536 00:16:14.143 }, 00:16:14.143 { 00:16:14.143 "name": "BaseBdev3", 00:16:14.143 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 00:16:14.143 "is_configured": true, 00:16:14.143 "data_offset": 0, 00:16:14.143 "data_size": 65536 00:16:14.143 } 00:16:14.143 ] 00:16:14.143 }' 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.143 
11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.143 "name": "raid_bdev1", 00:16:14.143 "uuid": "3910ba6d-69e1-4ca1-bdff-8ffec7d16bbb", 00:16:14.143 "strip_size_kb": 64, 00:16:14.143 "state": "online", 00:16:14.143 "raid_level": "raid5f", 00:16:14.143 "superblock": false, 00:16:14.143 "num_base_bdevs": 3, 00:16:14.143 "num_base_bdevs_discovered": 3, 00:16:14.143 "num_base_bdevs_operational": 3, 00:16:14.143 "base_bdevs_list": [ 00:16:14.143 { 00:16:14.143 "name": "spare", 00:16:14.143 "uuid": "d790a1f8-c22c-558c-898a-ec81602eb468", 00:16:14.143 "is_configured": true, 00:16:14.143 "data_offset": 0, 00:16:14.143 "data_size": 65536 00:16:14.143 }, 00:16:14.143 { 00:16:14.143 "name": "BaseBdev2", 00:16:14.143 "uuid": "fbefddd9-f312-5716-abd6-d020c8508eb4", 00:16:14.143 "is_configured": true, 00:16:14.143 "data_offset": 0, 00:16:14.143 "data_size": 65536 00:16:14.143 }, 00:16:14.143 { 00:16:14.143 "name": "BaseBdev3", 00:16:14.143 "uuid": "255b5413-81b0-584d-b18f-629cac0bdee6", 
00:16:14.143 "is_configured": true, 00:16:14.143 "data_offset": 0, 00:16:14.143 "data_size": 65536 00:16:14.143 } 00:16:14.143 ] 00:16:14.143 }' 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.143 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.711 [2024-11-04 11:48:39.977395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.711 [2024-11-04 11:48:39.977492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.711 [2024-11-04 11:48:39.977635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.711 [2024-11-04 11:48:39.977799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.711 [2024-11-04 11:48:39.977883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.711 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.712 11:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.712 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:14.971 /dev/nbd0 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:14.971 1+0 records in 00:16:14.971 1+0 records out 00:16:14.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420448 s, 9.7 MB/s 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:14.971 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:15.231 /dev/nbd1 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:15.231 11:48:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.231 1+0 records in 00:16:15.231 1+0 records out 00:16:15.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424555 s, 9.6 MB/s 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.231 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.498 11:48:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:15.775 11:48:41 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:15.775 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81815 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81815 ']' 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81815 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81815 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:15.776 killing process with pid 81815 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81815' 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81815 00:16:15.776 
Received shutdown signal, test time was about 60.000000 seconds 00:16:15.776 00:16:15.776 Latency(us) 00:16:15.776 [2024-11-04T11:48:41.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.776 [2024-11-04T11:48:41.298Z] =================================================================================================================== 00:16:15.776 [2024-11-04T11:48:41.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:15.776 [2024-11-04 11:48:41.264060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.776 11:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81815 00:16:16.343 [2024-11-04 11:48:41.665276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.280 11:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:17.281 00:16:17.281 real 0m15.144s 00:16:17.281 user 0m18.483s 00:16:17.281 sys 0m1.990s 00:16:17.281 11:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:17.281 ************************************ 00:16:17.281 END TEST raid5f_rebuild_test 00:16:17.281 ************************************ 00:16:17.281 11:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.540 11:48:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:17.540 11:48:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:17.540 11:48:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:17.540 11:48:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.540 ************************************ 00:16:17.540 START TEST raid5f_rebuild_test_sb 00:16:17.540 ************************************ 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:16:17.540 
11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:17.540 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82257 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82257 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82257 ']' 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.541 11:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.541 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:17.541 Zero copy mechanism will not be used. 00:16:17.541 [2024-11-04 11:48:42.936118] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:16:17.541 [2024-11-04 11:48:42.936237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82257 ] 00:16:17.800 [2024-11-04 11:48:43.111459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.800 [2024-11-04 11:48:43.227099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.059 [2024-11-04 11:48:43.431787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.059 [2024-11-04 11:48:43.431820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.317 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.317 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:18.317 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.318 11:48:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.318 BaseBdev1_malloc 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.318 [2024-11-04 11:48:43.818140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:18.318 [2024-11-04 11:48:43.818225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.318 [2024-11-04 11:48:43.818250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:18.318 [2024-11-04 11:48:43.818261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.318 [2024-11-04 11:48:43.820354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.318 [2024-11-04 11:48:43.820408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:18.318 BaseBdev1 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.318 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 BaseBdev2_malloc 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 [2024-11-04 11:48:43.873823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:18.577 [2024-11-04 11:48:43.873890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.577 [2024-11-04 11:48:43.873911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:18.577 [2024-11-04 11:48:43.873924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.577 [2024-11-04 11:48:43.875972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.577 [2024-11-04 11:48:43.876011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:18.577 BaseBdev2 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 BaseBdev3_malloc 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 [2024-11-04 11:48:43.940687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:18.577 [2024-11-04 11:48:43.940745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.577 [2024-11-04 11:48:43.940764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:18.577 [2024-11-04 11:48:43.940775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.577 [2024-11-04 11:48:43.942783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.577 [2024-11-04 11:48:43.942821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:18.577 BaseBdev3 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 spare_malloc 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.577 spare_delay 00:16:18.577 
11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:16:18.577 11:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.577 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.577 [2024-11-04 11:48:44.006669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:16:18.577 [2024-11-04 11:48:44.006724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:18.577 [2024-11-04 11:48:44.006741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:16:18.577 [2024-11-04 11:48:44.006751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:18.577 [2024-11-04 11:48:44.008892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:18.577 [2024-11-04 11:48:44.008940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:16:18.577 spare
00:16:18.577 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.577 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:16:18.577 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.577 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.577 [2024-11-04 11:48:44.018736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:18.578 [2024-11-04 11:48:44.020696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:18.578 [2024-11-04 11:48:44.020770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:18.578 [2024-11-04 11:48:44.020954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:18.578 [2024-11-04 11:48:44.020978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:16:18.578 [2024-11-04 11:48:44.021275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:18.578 [2024-11-04 11:48:44.027529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:18.578 [2024-11-04 11:48:44.027569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:18.578 [2024-11-04 11:48:44.027777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:18.578 "name": "raid_bdev1",
00:16:18.578 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:18.578 "strip_size_kb": 64,
00:16:18.578 "state": "online",
00:16:18.578 "raid_level": "raid5f",
00:16:18.578 "superblock": true,
00:16:18.578 "num_base_bdevs": 3,
00:16:18.578 "num_base_bdevs_discovered": 3,
00:16:18.578 "num_base_bdevs_operational": 3,
00:16:18.578 "base_bdevs_list": [
00:16:18.578 {
00:16:18.578 "name": "BaseBdev1",
00:16:18.578 "uuid": "f76290d1-199c-5054-9cb4-50961c5c09f8",
00:16:18.578 "is_configured": true,
00:16:18.578 "data_offset": 2048,
00:16:18.578 "data_size": 63488
00:16:18.578 },
00:16:18.578 {
00:16:18.578 "name": "BaseBdev2",
00:16:18.578 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:18.578 "is_configured": true,
00:16:18.578 "data_offset": 2048,
00:16:18.578 "data_size": 63488
00:16:18.578 },
00:16:18.578 {
00:16:18.578 "name": "BaseBdev3",
00:16:18.578 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:18.578 "is_configured": true,
00:16:18.578 "data_offset": 2048,
00:16:18.578 "data_size": 63488
00:16:18.578 }
00:16:18.578 ]
00:16:18.578 }'
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:18.578 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.147 [2024-11-04 11:48:44.513855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:19.147 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:16:19.406 [2024-11-04 11:48:44.801305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:16:19.406 /dev/nbd0
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:16:19.406 1+0 records in
00:16:19.406 1+0 records out
00:16:19.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524679 s, 7.8 MB/s
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128
00:16:19.406 11:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:16:19.984 496+0 records in
00:16:19.984 496+0 records out
00:16:19.984 65011712 bytes (65 MB, 62 MiB) copied, 0.356704 s, 182 MB/s
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:16:19.984 [2024-11-04 11:48:45.456653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:19.984 [2024-11-04 11:48:45.473521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.984 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:20.260 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.260 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:20.260 "name": "raid_bdev1",
00:16:20.260 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:20.260 "strip_size_kb": 64,
00:16:20.260 "state": "online",
00:16:20.260 "raid_level": "raid5f",
00:16:20.260 "superblock": true,
00:16:20.260 "num_base_bdevs": 3,
00:16:20.260 "num_base_bdevs_discovered": 2,
00:16:20.260 "num_base_bdevs_operational": 2,
00:16:20.260 "base_bdevs_list": [
00:16:20.260 {
00:16:20.260 "name": null,
00:16:20.260 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:20.260 "is_configured": false,
00:16:20.260 "data_offset": 0,
00:16:20.260 "data_size": 63488
00:16:20.260 },
00:16:20.260 {
00:16:20.260 "name": "BaseBdev2",
00:16:20.260 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:20.260 "is_configured": true,
00:16:20.260 "data_offset": 2048,
00:16:20.260 "data_size": 63488
00:16:20.260 },
00:16:20.260 {
00:16:20.260 "name": "BaseBdev3",
00:16:20.260 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:20.260 "is_configured": true,
00:16:20.260 "data_offset": 2048,
00:16:20.260 "data_size": 63488
00:16:20.260 }
00:16:20.260 ]
00:16:20.260 }'
00:16:20.260 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:20.260 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:20.519 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:20.519 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:20.519 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:20.519 [2024-11-04 11:48:45.884844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:20.519 [2024-11-04 11:48:45.903777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80
00:16:20.519 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.519 11:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:20.519 [2024-11-04 11:48:45.912374] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:21.456 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:21.457 "name": "raid_bdev1",
00:16:21.457 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:21.457 "strip_size_kb": 64,
00:16:21.457 "state": "online",
00:16:21.457 "raid_level": "raid5f",
00:16:21.457 "superblock": true,
00:16:21.457 "num_base_bdevs": 3,
00:16:21.457 "num_base_bdevs_discovered": 3,
00:16:21.457 "num_base_bdevs_operational": 3,
00:16:21.457 "process": {
00:16:21.457 "type": "rebuild",
00:16:21.457 "target": "spare",
00:16:21.457 "progress": {
00:16:21.457 "blocks": 18432,
00:16:21.457 "percent": 14
00:16:21.457 }
00:16:21.457 },
00:16:21.457 "base_bdevs_list": [
00:16:21.457 {
00:16:21.457 "name": "spare",
00:16:21.457 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1",
00:16:21.457 "is_configured": true,
00:16:21.457 "data_offset": 2048,
00:16:21.457 "data_size": 63488
00:16:21.457 },
00:16:21.457 {
00:16:21.457 "name": "BaseBdev2",
00:16:21.457 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:21.457 "is_configured": true,
00:16:21.457 "data_offset": 2048,
00:16:21.457 "data_size": 63488
00:16:21.457 },
00:16:21.457 {
00:16:21.457 "name": "BaseBdev3",
00:16:21.457 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:21.457 "is_configured": true,
00:16:21.457 "data_offset": 2048,
00:16:21.457 "data_size": 63488
00:16:21.457 }
00:16:21.457 ]
00:16:21.457 }'
00:16:21.457 11:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:21.716 [2024-11-04 11:48:47.043682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:21.716 [2024-11-04 11:48:47.122598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:21.716 [2024-11-04 11:48:47.122787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:21.716 [2024-11-04 11:48:47.122830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:21.716 [2024-11-04 11:48:47.122852] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:21.716 "name": "raid_bdev1",
00:16:21.716 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:21.716 "strip_size_kb": 64,
00:16:21.716 "state": "online",
00:16:21.716 "raid_level": "raid5f",
00:16:21.716 "superblock": true,
00:16:21.716 "num_base_bdevs": 3,
00:16:21.716 "num_base_bdevs_discovered": 2,
00:16:21.716 "num_base_bdevs_operational": 2,
00:16:21.716 "base_bdevs_list": [
00:16:21.716 {
00:16:21.716 "name": null,
00:16:21.716 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:21.716 "is_configured": false,
00:16:21.716 "data_offset": 0,
00:16:21.716 "data_size": 63488
00:16:21.716 },
00:16:21.716 {
00:16:21.716 "name": "BaseBdev2",
00:16:21.716 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:21.716 "is_configured": true,
00:16:21.716 "data_offset": 2048,
00:16:21.716 "data_size": 63488
00:16:21.716 },
00:16:21.716 {
00:16:21.716 "name": "BaseBdev3",
00:16:21.716 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:21.716 "is_configured": true,
00:16:21.716 "data_offset": 2048,
00:16:21.716 "data_size": 63488
00:16:21.716 }
00:16:21.716 ]
00:16:21.716 }'
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:21.716 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:22.285 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:22.285 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:22.286 "name": "raid_bdev1",
00:16:22.286 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:22.286 "strip_size_kb": 64,
00:16:22.286 "state": "online",
00:16:22.286 "raid_level": "raid5f",
00:16:22.286 "superblock": true,
00:16:22.286 "num_base_bdevs": 3,
00:16:22.286 "num_base_bdevs_discovered": 2,
00:16:22.286 "num_base_bdevs_operational": 2,
00:16:22.286 "base_bdevs_list": [
00:16:22.286 {
00:16:22.286 "name": null,
00:16:22.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.286 "is_configured": false,
00:16:22.286 "data_offset": 0,
00:16:22.286 "data_size": 63488
00:16:22.286 },
00:16:22.286 {
00:16:22.286 "name": "BaseBdev2",
00:16:22.286 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:22.286 "is_configured": true,
00:16:22.286 "data_offset": 2048,
00:16:22.286 "data_size": 63488
00:16:22.286 },
00:16:22.286 {
00:16:22.286 "name": "BaseBdev3",
00:16:22.286 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:22.286 "is_configured": true,
00:16:22.286 "data_offset": 2048,
00:16:22.286 "data_size": 63488
00:16:22.286 }
00:16:22.286 ]
00:16:22.286 }'
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:22.286 [2024-11-04 11:48:47.742145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:22.286 [2024-11-04 11:48:47.757782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:22.286 11:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:16:22.286 [2024-11-04 11:48:47.765618] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.665 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:23.665 "name": "raid_bdev1",
00:16:23.665 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:23.665 "strip_size_kb": 64,
00:16:23.665 "state": "online",
00:16:23.666 "raid_level": "raid5f",
00:16:23.666 "superblock": true,
00:16:23.666 "num_base_bdevs": 3,
00:16:23.666 "num_base_bdevs_discovered": 3,
00:16:23.666 "num_base_bdevs_operational": 3,
00:16:23.666 "process": {
00:16:23.666 "type": "rebuild",
00:16:23.666 "target": "spare",
00:16:23.666 "progress": {
00:16:23.666 "blocks": 20480,
00:16:23.666 "percent": 16
00:16:23.666 }
00:16:23.666 },
00:16:23.666 "base_bdevs_list": [
00:16:23.666 {
00:16:23.666 "name": "spare",
00:16:23.666 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1",
00:16:23.666 "is_configured": true,
00:16:23.666 "data_offset": 2048,
00:16:23.666 "data_size": 63488
00:16:23.666 },
00:16:23.666 {
00:16:23.666 "name": "BaseBdev2",
00:16:23.666 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:23.666 "is_configured": true,
00:16:23.666 "data_offset": 2048,
00:16:23.666 "data_size": 63488
00:16:23.666 },
00:16:23.666 {
00:16:23.666 "name": "BaseBdev3",
00:16:23.666 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:23.666 "is_configured": true,
00:16:23.666 "data_offset": 2048,
00:16:23.666 "data_size": 63488
00:16:23.666 }
00:16:23.666 ]
00:16:23.666 }'
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:16:23.666 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=570
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:23.666 "name": "raid_bdev1",
00:16:23.666 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:23.666 "strip_size_kb": 64,
00:16:23.666 "state": "online",
00:16:23.666 "raid_level": "raid5f",
00:16:23.666 "superblock": true,
00:16:23.666 "num_base_bdevs": 3,
00:16:23.666 "num_base_bdevs_discovered": 3,
00:16:23.666 "num_base_bdevs_operational": 3,
00:16:23.666 "process": {
00:16:23.666 "type": "rebuild",
00:16:23.666 "target": "spare",
00:16:23.666 "progress": {
00:16:23.666 "blocks": 22528,
00:16:23.666 "percent": 17
00:16:23.666 }
00:16:23.666 },
00:16:23.666 "base_bdevs_list": [
00:16:23.666 {
00:16:23.666 "name": "spare",
00:16:23.666 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1",
00:16:23.666 "is_configured": true,
00:16:23.666 "data_offset": 2048,
00:16:23.666 "data_size": 63488
00:16:23.666 },
00:16:23.666 {
00:16:23.666 "name": "BaseBdev2",
00:16:23.666 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:23.666 "is_configured": true,
00:16:23.666 "data_offset": 2048,
00:16:23.666 "data_size": 63488
00:16:23.666 },
00:16:23.666 {
00:16:23.666 "name": "BaseBdev3",
00:16:23.666 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:23.666 "is_configured": true,
00:16:23.666 "data_offset": 2048,
00:16:23.666 "data_size": 63488
00:16:23.666 }
00:16:23.666 ]
00:16:23.666 }'
00:16:23.666 11:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:23.666 11:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:23.666 11:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:23.666 11:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
11:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:24.617 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:24.617 "name": "raid_bdev1",
00:16:24.617 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:24.617 "strip_size_kb": 64,
00:16:24.617 "state": "online",
00:16:24.617 "raid_level": "raid5f",
00:16:24.617 "superblock": true,
00:16:24.617 "num_base_bdevs": 3,
00:16:24.617 "num_base_bdevs_discovered": 3,
00:16:24.617 "num_base_bdevs_operational": 3,
00:16:24.617 "process": {
00:16:24.617 "type": "rebuild",
00:16:24.617 "target": "spare",
00:16:24.617 "progress": {
00:16:24.617 "blocks": 45056,
00:16:24.617 "percent": 35
00:16:24.617 }
00:16:24.617 },
00:16:24.617 "base_bdevs_list": [
00:16:24.617 {
00:16:24.617 "name": "spare",
00:16:24.617 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1",
00:16:24.617 "is_configured": true,
00:16:24.617 "data_offset": 2048,
00:16:24.617 "data_size": 63488
00:16:24.617 },
00:16:24.617 {
00:16:24.617 "name": "BaseBdev2",
00:16:24.617 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:24.617 "is_configured": true,
00:16:24.617 "data_offset": 2048,
00:16:24.617 "data_size": 63488
00:16:24.617 },
00:16:24.617 {
00:16:24.618 "name": "BaseBdev3",
00:16:24.618 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:24.618 "is_configured": true,
00:16:24.618 "data_offset": 2048,
00:16:24.618 "data_size": 63488
00:16:24.618 }
00:16:24.618 ]
00:16:24.618 }'
00:16:24.618 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:24.877 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:24.877 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:24.877 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:24.877 11:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:25.815 "name": "raid_bdev1",
00:16:25.815 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6",
00:16:25.815 "strip_size_kb": 64,
00:16:25.815 "state": "online",
00:16:25.815 "raid_level": "raid5f",
00:16:25.815 "superblock": true,
00:16:25.815 "num_base_bdevs": 3,
00:16:25.815 "num_base_bdevs_discovered": 3,
00:16:25.815 "num_base_bdevs_operational": 3,
00:16:25.815 "process": {
00:16:25.815 "type": "rebuild",
00:16:25.815 "target": "spare",
00:16:25.815 "progress": {
00:16:25.815 "blocks": 69632,
00:16:25.815 "percent": 54
00:16:25.815 }
00:16:25.815 },
00:16:25.815 "base_bdevs_list": [
00:16:25.815 {
00:16:25.815 "name": "spare",
00:16:25.815 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1",
00:16:25.815 "is_configured": true,
00:16:25.815 "data_offset": 2048,
00:16:25.815 "data_size": 63488
00:16:25.815 },
00:16:25.815 {
00:16:25.815 "name": "BaseBdev2",
00:16:25.815 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69",
00:16:25.815 "is_configured": true,
00:16:25.815 "data_offset": 2048,
00:16:25.815 "data_size": 63488
00:16:25.815 },
00:16:25.815 {
00:16:25.815 "name": "BaseBdev3",
00:16:25.815 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079",
00:16:25.815 "is_configured": true,
00:16:25.815 "data_offset": 2048,
00:16:25.815 "data_size": 63488
00:16:25.815 }
00:16:25.815 ]
00:16:25.815 }'
00:16:25.815 11:48:51
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.815 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.074 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.074 11:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.011 "name": "raid_bdev1", 00:16:27.011 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:27.011 
"strip_size_kb": 64, 00:16:27.011 "state": "online", 00:16:27.011 "raid_level": "raid5f", 00:16:27.011 "superblock": true, 00:16:27.011 "num_base_bdevs": 3, 00:16:27.011 "num_base_bdevs_discovered": 3, 00:16:27.011 "num_base_bdevs_operational": 3, 00:16:27.011 "process": { 00:16:27.011 "type": "rebuild", 00:16:27.011 "target": "spare", 00:16:27.011 "progress": { 00:16:27.011 "blocks": 92160, 00:16:27.011 "percent": 72 00:16:27.011 } 00:16:27.011 }, 00:16:27.011 "base_bdevs_list": [ 00:16:27.011 { 00:16:27.011 "name": "spare", 00:16:27.011 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:27.011 "is_configured": true, 00:16:27.011 "data_offset": 2048, 00:16:27.011 "data_size": 63488 00:16:27.011 }, 00:16:27.011 { 00:16:27.011 "name": "BaseBdev2", 00:16:27.011 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:27.011 "is_configured": true, 00:16:27.011 "data_offset": 2048, 00:16:27.011 "data_size": 63488 00:16:27.011 }, 00:16:27.011 { 00:16:27.011 "name": "BaseBdev3", 00:16:27.011 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:27.011 "is_configured": true, 00:16:27.011 "data_offset": 2048, 00:16:27.011 "data_size": 63488 00:16:27.011 } 00:16:27.011 ] 00:16:27.011 }' 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.011 11:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.405 "name": "raid_bdev1", 00:16:28.405 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:28.405 "strip_size_kb": 64, 00:16:28.405 "state": "online", 00:16:28.405 "raid_level": "raid5f", 00:16:28.405 "superblock": true, 00:16:28.405 "num_base_bdevs": 3, 00:16:28.405 "num_base_bdevs_discovered": 3, 00:16:28.405 "num_base_bdevs_operational": 3, 00:16:28.405 "process": { 00:16:28.405 "type": "rebuild", 00:16:28.405 "target": "spare", 00:16:28.405 "progress": { 00:16:28.405 "blocks": 116736, 00:16:28.405 "percent": 91 00:16:28.405 } 00:16:28.405 }, 00:16:28.405 "base_bdevs_list": [ 00:16:28.405 { 00:16:28.405 "name": "spare", 00:16:28.405 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:28.405 "is_configured": true, 00:16:28.405 "data_offset": 2048, 00:16:28.405 "data_size": 63488 00:16:28.405 }, 00:16:28.405 { 00:16:28.405 "name": "BaseBdev2", 00:16:28.405 "uuid": 
"264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:28.405 "is_configured": true, 00:16:28.405 "data_offset": 2048, 00:16:28.405 "data_size": 63488 00:16:28.405 }, 00:16:28.405 { 00:16:28.405 "name": "BaseBdev3", 00:16:28.405 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:28.405 "is_configured": true, 00:16:28.405 "data_offset": 2048, 00:16:28.405 "data_size": 63488 00:16:28.405 } 00:16:28.405 ] 00:16:28.405 }' 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.405 11:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.665 [2024-11-04 11:48:54.018915] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:28.665 [2024-11-04 11:48:54.019102] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:28.665 [2024-11-04 11:48:54.019283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.234 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.234 "name": "raid_bdev1", 00:16:29.234 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:29.234 "strip_size_kb": 64, 00:16:29.234 "state": "online", 00:16:29.234 "raid_level": "raid5f", 00:16:29.234 "superblock": true, 00:16:29.234 "num_base_bdevs": 3, 00:16:29.234 "num_base_bdevs_discovered": 3, 00:16:29.234 "num_base_bdevs_operational": 3, 00:16:29.234 "base_bdevs_list": [ 00:16:29.234 { 00:16:29.234 "name": "spare", 00:16:29.234 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:29.234 "is_configured": true, 00:16:29.234 "data_offset": 2048, 00:16:29.234 "data_size": 63488 00:16:29.234 }, 00:16:29.234 { 00:16:29.234 "name": "BaseBdev2", 00:16:29.234 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:29.234 "is_configured": true, 00:16:29.234 "data_offset": 2048, 00:16:29.234 "data_size": 63488 00:16:29.234 }, 00:16:29.234 { 00:16:29.234 "name": "BaseBdev3", 00:16:29.235 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:29.235 "is_configured": true, 00:16:29.235 "data_offset": 2048, 00:16:29.235 "data_size": 63488 00:16:29.235 } 00:16:29.235 ] 00:16:29.235 }' 00:16:29.235 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.494 "name": "raid_bdev1", 00:16:29.494 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:29.494 "strip_size_kb": 64, 00:16:29.494 "state": "online", 00:16:29.494 "raid_level": "raid5f", 00:16:29.494 "superblock": true, 00:16:29.494 "num_base_bdevs": 3, 00:16:29.494 "num_base_bdevs_discovered": 3, 00:16:29.494 "num_base_bdevs_operational": 3, 00:16:29.494 "base_bdevs_list": [ 
00:16:29.494 { 00:16:29.494 "name": "spare", 00:16:29.494 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:29.494 "is_configured": true, 00:16:29.494 "data_offset": 2048, 00:16:29.494 "data_size": 63488 00:16:29.494 }, 00:16:29.494 { 00:16:29.494 "name": "BaseBdev2", 00:16:29.494 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:29.494 "is_configured": true, 00:16:29.494 "data_offset": 2048, 00:16:29.494 "data_size": 63488 00:16:29.494 }, 00:16:29.494 { 00:16:29.494 "name": "BaseBdev3", 00:16:29.494 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:29.494 "is_configured": true, 00:16:29.494 "data_offset": 2048, 00:16:29.494 "data_size": 63488 00:16:29.494 } 00:16:29.494 ] 00:16:29.494 }' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.494 11:48:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.494 "name": "raid_bdev1", 00:16:29.494 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:29.494 "strip_size_kb": 64, 00:16:29.494 "state": "online", 00:16:29.494 "raid_level": "raid5f", 00:16:29.494 "superblock": true, 00:16:29.494 "num_base_bdevs": 3, 00:16:29.494 "num_base_bdevs_discovered": 3, 00:16:29.494 "num_base_bdevs_operational": 3, 00:16:29.494 "base_bdevs_list": [ 00:16:29.494 { 00:16:29.494 "name": "spare", 00:16:29.494 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:29.494 "is_configured": true, 00:16:29.494 "data_offset": 2048, 00:16:29.494 "data_size": 63488 00:16:29.494 }, 00:16:29.494 { 00:16:29.494 "name": "BaseBdev2", 00:16:29.494 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:29.494 "is_configured": true, 00:16:29.494 "data_offset": 2048, 00:16:29.494 "data_size": 63488 00:16:29.494 }, 00:16:29.494 { 00:16:29.494 "name": "BaseBdev3", 00:16:29.494 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:29.494 "is_configured": true, 00:16:29.494 "data_offset": 2048, 00:16:29.494 
"data_size": 63488 00:16:29.494 } 00:16:29.494 ] 00:16:29.494 }' 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.494 11:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.062 [2024-11-04 11:48:55.374947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.062 [2024-11-04 11:48:55.375023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.062 [2024-11-04 11:48:55.375167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.062 [2024-11-04 11:48:55.375329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.062 [2024-11-04 11:48:55.375389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.062 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:30.321 /dev/nbd0 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:30.321 11:48:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.321 1+0 records in 00:16:30.321 1+0 records out 00:16:30.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366419 s, 11.2 MB/s 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.321 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:30.579 /dev/nbd1 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:30.579 11:48:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.579 1+0 records in 00:16:30.579 1+0 records out 00:16:30.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280131 s, 14.6 MB/s 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:30.579 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.580 11:48:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:30.580 11:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:30.838 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:30.838 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.838 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:30.838 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.838 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:30.838 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.838 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.109 
11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.109 [2024-11-04 11:48:56.612868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:31.109 
[2024-11-04 11:48:56.613015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.109 [2024-11-04 11:48:56.613067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:31.109 [2024-11-04 11:48:56.613121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.109 [2024-11-04 11:48:56.615723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.109 [2024-11-04 11:48:56.615808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:31.109 [2024-11-04 11:48:56.615959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:31.109 [2024-11-04 11:48:56.616079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.109 [2024-11-04 11:48:56.616249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.109 [2024-11-04 11:48:56.616358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.109 spare 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.109 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.380 [2024-11-04 11:48:56.716343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:31.380 [2024-11-04 11:48:56.716505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:31.380 [2024-11-04 11:48:56.716937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:31.380 [2024-11-04 11:48:56.723664] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:31.380 [2024-11-04 11:48:56.723688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:31.380 [2024-11-04 11:48:56.723934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:31.380 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.381 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.381 "name": "raid_bdev1", 00:16:31.381 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:31.381 "strip_size_kb": 64, 00:16:31.381 "state": "online", 00:16:31.381 "raid_level": "raid5f", 00:16:31.381 "superblock": true, 00:16:31.381 "num_base_bdevs": 3, 00:16:31.381 "num_base_bdevs_discovered": 3, 00:16:31.381 "num_base_bdevs_operational": 3, 00:16:31.381 "base_bdevs_list": [ 00:16:31.381 { 00:16:31.381 "name": "spare", 00:16:31.381 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:31.381 "is_configured": true, 00:16:31.381 "data_offset": 2048, 00:16:31.381 "data_size": 63488 00:16:31.381 }, 00:16:31.381 { 00:16:31.381 "name": "BaseBdev2", 00:16:31.381 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:31.381 "is_configured": true, 00:16:31.381 "data_offset": 2048, 00:16:31.381 "data_size": 63488 00:16:31.381 }, 00:16:31.381 { 00:16:31.381 "name": "BaseBdev3", 00:16:31.381 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:31.381 "is_configured": true, 00:16:31.381 "data_offset": 2048, 00:16:31.381 "data_size": 63488 00:16:31.381 } 00:16:31.381 ] 00:16:31.381 }' 00:16:31.381 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.381 11:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.951 "name": "raid_bdev1", 00:16:31.951 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:31.951 "strip_size_kb": 64, 00:16:31.951 "state": "online", 00:16:31.951 "raid_level": "raid5f", 00:16:31.951 "superblock": true, 00:16:31.951 "num_base_bdevs": 3, 00:16:31.951 "num_base_bdevs_discovered": 3, 00:16:31.951 "num_base_bdevs_operational": 3, 00:16:31.951 "base_bdevs_list": [ 00:16:31.951 { 00:16:31.951 "name": "spare", 00:16:31.951 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:31.951 "is_configured": true, 00:16:31.951 "data_offset": 2048, 00:16:31.951 "data_size": 63488 00:16:31.951 }, 00:16:31.951 { 00:16:31.951 "name": "BaseBdev2", 00:16:31.951 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:31.951 "is_configured": true, 00:16:31.951 "data_offset": 2048, 00:16:31.951 "data_size": 63488 00:16:31.951 }, 00:16:31.951 { 00:16:31.951 "name": "BaseBdev3", 00:16:31.951 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:31.951 "is_configured": true, 00:16:31.951 "data_offset": 2048, 00:16:31.951 "data_size": 63488 00:16:31.951 } 00:16:31.951 ] 00:16:31.951 }' 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.951 [2024-11-04 11:48:57.350607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.951 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.951 "name": "raid_bdev1", 00:16:31.951 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:31.951 "strip_size_kb": 64, 00:16:31.951 "state": "online", 00:16:31.951 "raid_level": "raid5f", 00:16:31.951 "superblock": true, 00:16:31.952 "num_base_bdevs": 3, 00:16:31.952 "num_base_bdevs_discovered": 2, 00:16:31.952 "num_base_bdevs_operational": 2, 00:16:31.952 "base_bdevs_list": [ 00:16:31.952 { 00:16:31.952 "name": null, 00:16:31.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.952 "is_configured": false, 00:16:31.952 "data_offset": 0, 00:16:31.952 "data_size": 63488 00:16:31.952 }, 00:16:31.952 { 00:16:31.952 "name": "BaseBdev2", 
00:16:31.952 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:31.952 "is_configured": true, 00:16:31.952 "data_offset": 2048, 00:16:31.952 "data_size": 63488 00:16:31.952 }, 00:16:31.952 { 00:16:31.952 "name": "BaseBdev3", 00:16:31.952 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:31.952 "is_configured": true, 00:16:31.952 "data_offset": 2048, 00:16:31.952 "data_size": 63488 00:16:31.952 } 00:16:31.952 ] 00:16:31.952 }' 00:16:31.952 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.952 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.521 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:32.521 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.521 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.521 [2024-11-04 11:48:57.785945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.521 [2024-11-04 11:48:57.786268] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:32.521 [2024-11-04 11:48:57.786345] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:32.521 [2024-11-04 11:48:57.786445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.521 [2024-11-04 11:48:57.802593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:32.521 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.521 11:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:32.521 [2024-11-04 11:48:57.810520] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.459 "name": "raid_bdev1", 00:16:33.459 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:33.459 "strip_size_kb": 64, 00:16:33.459 "state": "online", 00:16:33.459 
"raid_level": "raid5f", 00:16:33.459 "superblock": true, 00:16:33.459 "num_base_bdevs": 3, 00:16:33.459 "num_base_bdevs_discovered": 3, 00:16:33.459 "num_base_bdevs_operational": 3, 00:16:33.459 "process": { 00:16:33.459 "type": "rebuild", 00:16:33.459 "target": "spare", 00:16:33.459 "progress": { 00:16:33.459 "blocks": 20480, 00:16:33.459 "percent": 16 00:16:33.459 } 00:16:33.459 }, 00:16:33.459 "base_bdevs_list": [ 00:16:33.459 { 00:16:33.459 "name": "spare", 00:16:33.459 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:33.459 "is_configured": true, 00:16:33.459 "data_offset": 2048, 00:16:33.459 "data_size": 63488 00:16:33.459 }, 00:16:33.459 { 00:16:33.459 "name": "BaseBdev2", 00:16:33.459 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:33.459 "is_configured": true, 00:16:33.459 "data_offset": 2048, 00:16:33.459 "data_size": 63488 00:16:33.459 }, 00:16:33.459 { 00:16:33.459 "name": "BaseBdev3", 00:16:33.459 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:33.459 "is_configured": true, 00:16:33.459 "data_offset": 2048, 00:16:33.459 "data_size": 63488 00:16:33.459 } 00:16:33.459 ] 00:16:33.459 }' 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:33.459 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.460 11:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.460 [2024-11-04 11:48:58.965663] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.719 [2024-11-04 11:48:59.020763] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:33.719 [2024-11-04 11:48:59.020892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.719 [2024-11-04 11:48:59.020930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.719 [2024-11-04 11:48:59.020954] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.719 "name": "raid_bdev1", 00:16:33.719 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:33.719 "strip_size_kb": 64, 00:16:33.719 "state": "online", 00:16:33.719 "raid_level": "raid5f", 00:16:33.719 "superblock": true, 00:16:33.719 "num_base_bdevs": 3, 00:16:33.719 "num_base_bdevs_discovered": 2, 00:16:33.719 "num_base_bdevs_operational": 2, 00:16:33.719 "base_bdevs_list": [ 00:16:33.719 { 00:16:33.719 "name": null, 00:16:33.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.719 "is_configured": false, 00:16:33.719 "data_offset": 0, 00:16:33.719 "data_size": 63488 00:16:33.719 }, 00:16:33.719 { 00:16:33.719 "name": "BaseBdev2", 00:16:33.719 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:33.719 "is_configured": true, 00:16:33.719 "data_offset": 2048, 00:16:33.719 "data_size": 63488 00:16:33.719 }, 00:16:33.719 { 00:16:33.719 "name": "BaseBdev3", 00:16:33.719 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:33.719 "is_configured": true, 00:16:33.719 "data_offset": 2048, 00:16:33.719 "data_size": 63488 00:16:33.719 } 00:16:33.719 ] 00:16:33.719 }' 00:16:33.719 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.720 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.289 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.289 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.289 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.289 [2024-11-04 11:48:59.524043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.289 [2024-11-04 11:48:59.524202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.289 [2024-11-04 11:48:59.524233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:34.289 [2024-11-04 11:48:59.524250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.289 [2024-11-04 11:48:59.524865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.289 [2024-11-04 11:48:59.524893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.289 [2024-11-04 11:48:59.525013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.289 [2024-11-04 11:48:59.525033] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:34.289 [2024-11-04 11:48:59.525046] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:34.289 [2024-11-04 11:48:59.525074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.290 spare 00:16:34.290 [2024-11-04 11:48:59.543909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:34.290 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.290 11:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:34.290 [2024-11-04 11:48:59.552802] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.228 "name": "raid_bdev1", 00:16:35.228 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:35.228 "strip_size_kb": 64, 00:16:35.228 "state": 
"online", 00:16:35.228 "raid_level": "raid5f", 00:16:35.228 "superblock": true, 00:16:35.228 "num_base_bdevs": 3, 00:16:35.228 "num_base_bdevs_discovered": 3, 00:16:35.228 "num_base_bdevs_operational": 3, 00:16:35.228 "process": { 00:16:35.228 "type": "rebuild", 00:16:35.228 "target": "spare", 00:16:35.228 "progress": { 00:16:35.228 "blocks": 20480, 00:16:35.228 "percent": 16 00:16:35.228 } 00:16:35.228 }, 00:16:35.228 "base_bdevs_list": [ 00:16:35.228 { 00:16:35.228 "name": "spare", 00:16:35.228 "uuid": "359e9ee5-1697-560a-931d-051395ca7bd1", 00:16:35.228 "is_configured": true, 00:16:35.228 "data_offset": 2048, 00:16:35.228 "data_size": 63488 00:16:35.228 }, 00:16:35.228 { 00:16:35.228 "name": "BaseBdev2", 00:16:35.228 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:35.228 "is_configured": true, 00:16:35.228 "data_offset": 2048, 00:16:35.228 "data_size": 63488 00:16:35.228 }, 00:16:35.228 { 00:16:35.228 "name": "BaseBdev3", 00:16:35.228 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:35.228 "is_configured": true, 00:16:35.228 "data_offset": 2048, 00:16:35.228 "data_size": 63488 00:16:35.228 } 00:16:35.228 ] 00:16:35.228 }' 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.228 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.229 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.229 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.229 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.229 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.229 [2024-11-04 11:49:00.700054] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.488 [2024-11-04 11:49:00.763076] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:35.488 [2024-11-04 11:49:00.763246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.488 [2024-11-04 11:49:00.763305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.488 [2024-11-04 11:49:00.763330] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.488 "name": "raid_bdev1", 00:16:35.488 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:35.488 "strip_size_kb": 64, 00:16:35.488 "state": "online", 00:16:35.488 "raid_level": "raid5f", 00:16:35.488 "superblock": true, 00:16:35.488 "num_base_bdevs": 3, 00:16:35.488 "num_base_bdevs_discovered": 2, 00:16:35.488 "num_base_bdevs_operational": 2, 00:16:35.488 "base_bdevs_list": [ 00:16:35.488 { 00:16:35.488 "name": null, 00:16:35.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.488 "is_configured": false, 00:16:35.488 "data_offset": 0, 00:16:35.488 "data_size": 63488 00:16:35.488 }, 00:16:35.488 { 00:16:35.488 "name": "BaseBdev2", 00:16:35.488 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:35.488 "is_configured": true, 00:16:35.488 "data_offset": 2048, 00:16:35.488 "data_size": 63488 00:16:35.488 }, 00:16:35.488 { 00:16:35.488 "name": "BaseBdev3", 00:16:35.488 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:35.488 "is_configured": true, 00:16:35.488 "data_offset": 2048, 00:16:35.488 "data_size": 63488 00:16:35.488 } 00:16:35.488 ] 00:16:35.488 }' 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.488 11:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.058 "name": "raid_bdev1", 00:16:36.058 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:36.058 "strip_size_kb": 64, 00:16:36.058 "state": "online", 00:16:36.058 "raid_level": "raid5f", 00:16:36.058 "superblock": true, 00:16:36.058 "num_base_bdevs": 3, 00:16:36.058 "num_base_bdevs_discovered": 2, 00:16:36.058 "num_base_bdevs_operational": 2, 00:16:36.058 "base_bdevs_list": [ 00:16:36.058 { 00:16:36.058 "name": null, 00:16:36.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.058 "is_configured": false, 00:16:36.058 "data_offset": 0, 00:16:36.058 "data_size": 63488 00:16:36.058 }, 00:16:36.058 { 00:16:36.058 "name": "BaseBdev2", 00:16:36.058 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:36.058 "is_configured": true, 00:16:36.058 "data_offset": 2048, 00:16:36.058 "data_size": 63488 00:16:36.058 }, 00:16:36.058 { 00:16:36.058 "name": "BaseBdev3", 00:16:36.058 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:36.058 "is_configured": true, 
00:16:36.058 "data_offset": 2048, 00:16:36.058 "data_size": 63488 00:16:36.058 } 00:16:36.058 ] 00:16:36.058 }' 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.058 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.059 [2024-11-04 11:49:01.421223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:36.059 [2024-11-04 11:49:01.421354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.059 [2024-11-04 11:49:01.421411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:36.059 [2024-11-04 11:49:01.421457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.059 [2024-11-04 11:49:01.422003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.059 [2024-11-04 
11:49:01.422075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.059 [2024-11-04 11:49:01.422230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:36.059 [2024-11-04 11:49:01.422290] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.059 [2024-11-04 11:49:01.422378] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:36.059 [2024-11-04 11:49:01.422437] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:36.059 BaseBdev1 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.059 11:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.998 11:49:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.998 "name": "raid_bdev1", 00:16:36.998 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:36.998 "strip_size_kb": 64, 00:16:36.998 "state": "online", 00:16:36.998 "raid_level": "raid5f", 00:16:36.998 "superblock": true, 00:16:36.998 "num_base_bdevs": 3, 00:16:36.998 "num_base_bdevs_discovered": 2, 00:16:36.998 "num_base_bdevs_operational": 2, 00:16:36.998 "base_bdevs_list": [ 00:16:36.998 { 00:16:36.998 "name": null, 00:16:36.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.998 "is_configured": false, 00:16:36.998 "data_offset": 0, 00:16:36.998 "data_size": 63488 00:16:36.998 }, 00:16:36.998 { 00:16:36.998 "name": "BaseBdev2", 00:16:36.998 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:36.998 "is_configured": true, 00:16:36.998 "data_offset": 2048, 00:16:36.998 "data_size": 63488 00:16:36.998 }, 00:16:36.998 { 00:16:36.998 "name": "BaseBdev3", 00:16:36.998 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:36.998 "is_configured": true, 00:16:36.998 "data_offset": 2048, 00:16:36.998 "data_size": 63488 00:16:36.998 } 00:16:36.998 ] 00:16:36.998 }' 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.998 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.568 "name": "raid_bdev1", 00:16:37.568 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:37.568 "strip_size_kb": 64, 00:16:37.568 "state": "online", 00:16:37.568 "raid_level": "raid5f", 00:16:37.568 "superblock": true, 00:16:37.568 "num_base_bdevs": 3, 00:16:37.568 "num_base_bdevs_discovered": 2, 00:16:37.568 "num_base_bdevs_operational": 2, 00:16:37.568 "base_bdevs_list": [ 00:16:37.568 { 00:16:37.568 "name": null, 00:16:37.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.568 "is_configured": false, 00:16:37.568 "data_offset": 0, 00:16:37.568 "data_size": 63488 00:16:37.568 }, 00:16:37.568 { 00:16:37.568 "name": "BaseBdev2", 00:16:37.568 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 
00:16:37.568 "is_configured": true, 00:16:37.568 "data_offset": 2048, 00:16:37.568 "data_size": 63488 00:16:37.568 }, 00:16:37.568 { 00:16:37.568 "name": "BaseBdev3", 00:16:37.568 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:37.568 "is_configured": true, 00:16:37.568 "data_offset": 2048, 00:16:37.568 "data_size": 63488 00:16:37.568 } 00:16:37.568 ] 00:16:37.568 }' 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.568 11:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:37.568 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.568 11:49:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.568 [2024-11-04 11:49:03.034659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.568 [2024-11-04 11:49:03.034908] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.568 [2024-11-04 11:49:03.034993] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:37.568 request: 00:16:37.568 { 00:16:37.568 "base_bdev": "BaseBdev1", 00:16:37.568 "raid_bdev": "raid_bdev1", 00:16:37.568 "method": "bdev_raid_add_base_bdev", 00:16:37.568 "req_id": 1 00:16:37.568 } 00:16:37.568 Got JSON-RPC error response 00:16:37.568 response: 00:16:37.568 { 00:16:37.569 "code": -22, 00:16:37.569 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:37.569 } 00:16:37.569 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:37.569 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:37.569 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:37.569 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:37.569 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:37.569 11:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.948 "name": "raid_bdev1", 00:16:38.948 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:38.948 "strip_size_kb": 64, 00:16:38.948 "state": "online", 00:16:38.948 "raid_level": "raid5f", 00:16:38.948 "superblock": true, 00:16:38.948 "num_base_bdevs": 3, 00:16:38.948 "num_base_bdevs_discovered": 2, 00:16:38.948 "num_base_bdevs_operational": 2, 00:16:38.948 "base_bdevs_list": [ 00:16:38.948 { 00:16:38.948 "name": null, 00:16:38.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.948 "is_configured": false, 00:16:38.948 "data_offset": 0, 00:16:38.948 "data_size": 63488 00:16:38.948 }, 00:16:38.948 { 00:16:38.948 
"name": "BaseBdev2", 00:16:38.948 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:38.948 "is_configured": true, 00:16:38.948 "data_offset": 2048, 00:16:38.948 "data_size": 63488 00:16:38.948 }, 00:16:38.948 { 00:16:38.948 "name": "BaseBdev3", 00:16:38.948 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:38.948 "is_configured": true, 00:16:38.948 "data_offset": 2048, 00:16:38.948 "data_size": 63488 00:16:38.948 } 00:16:38.948 ] 00:16:38.948 }' 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.948 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.208 "name": "raid_bdev1", 00:16:39.208 "uuid": "cb18b8ec-6c70-437e-867a-8b4d8ead56c6", 00:16:39.208 
"strip_size_kb": 64, 00:16:39.208 "state": "online", 00:16:39.208 "raid_level": "raid5f", 00:16:39.208 "superblock": true, 00:16:39.208 "num_base_bdevs": 3, 00:16:39.208 "num_base_bdevs_discovered": 2, 00:16:39.208 "num_base_bdevs_operational": 2, 00:16:39.208 "base_bdevs_list": [ 00:16:39.208 { 00:16:39.208 "name": null, 00:16:39.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.208 "is_configured": false, 00:16:39.208 "data_offset": 0, 00:16:39.208 "data_size": 63488 00:16:39.208 }, 00:16:39.208 { 00:16:39.208 "name": "BaseBdev2", 00:16:39.208 "uuid": "264cd479-ef38-5525-9dc0-bf4130846d69", 00:16:39.208 "is_configured": true, 00:16:39.208 "data_offset": 2048, 00:16:39.208 "data_size": 63488 00:16:39.208 }, 00:16:39.208 { 00:16:39.208 "name": "BaseBdev3", 00:16:39.208 "uuid": "7e8668a0-8ecd-57d5-930e-0233e53e4079", 00:16:39.208 "is_configured": true, 00:16:39.208 "data_offset": 2048, 00:16:39.208 "data_size": 63488 00:16:39.208 } 00:16:39.208 ] 00:16:39.208 }' 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82257 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82257 ']' 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82257 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:39.208 11:49:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82257 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:39.208 killing process with pid 82257 00:16:39.208 Received shutdown signal, test time was about 60.000000 seconds 00:16:39.208 00:16:39.208 Latency(us) 00:16:39.208 [2024-11-04T11:49:04.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.208 [2024-11-04T11:49:04.730Z] =================================================================================================================== 00:16:39.208 [2024-11-04T11:49:04.730Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82257' 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82257 00:16:39.208 [2024-11-04 11:49:04.699238] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.208 11:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82257 00:16:39.208 [2024-11-04 11:49:04.699387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.208 [2024-11-04 11:49:04.699477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.208 [2024-11-04 11:49:04.699492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:39.777 [2024-11-04 11:49:05.118941] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.716 11:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:40.716 ************************************ 00:16:40.716 END TEST 
raid5f_rebuild_test_sb 00:16:40.716 ************************************ 00:16:40.716 00:16:40.716 real 0m23.358s 00:16:40.716 user 0m29.967s 00:16:40.716 sys 0m2.713s 00:16:40.716 11:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:40.716 11:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 11:49:06 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:40.976 11:49:06 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:40.976 11:49:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:40.976 11:49:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:40.976 11:49:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 ************************************ 00:16:40.976 START TEST raid5f_state_function_test 00:16:40.976 ************************************ 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83013 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83013' 00:16:40.976 Process raid pid: 83013 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83013 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83013 ']' 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.976 11:49:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.976 [2024-11-04 11:49:06.360321] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:16:40.976 [2024-11-04 11:49:06.360549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.235 [2024-11-04 11:49:06.532729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.235 [2024-11-04 11:49:06.645733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.493 [2024-11-04 11:49:06.858633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.493 [2024-11-04 11:49:06.858667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.752 [2024-11-04 11:49:07.239470] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.752 [2024-11-04 11:49:07.239564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.752 [2024-11-04 11:49:07.239594] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.752 [2024-11-04 11:49:07.239617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.752 [2024-11-04 11:49:07.239636] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:41.752 [2024-11-04 11:49:07.239656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.752 [2024-11-04 11:49:07.239673] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:41.752 [2024-11-04 11:49:07.239708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.752 11:49:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.752 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.012 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.012 "name": "Existed_Raid", 00:16:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.012 "strip_size_kb": 64, 00:16:42.012 "state": "configuring", 00:16:42.012 "raid_level": "raid5f", 00:16:42.012 "superblock": false, 00:16:42.012 "num_base_bdevs": 4, 00:16:42.012 "num_base_bdevs_discovered": 0, 00:16:42.012 "num_base_bdevs_operational": 4, 00:16:42.012 "base_bdevs_list": [ 00:16:42.012 { 00:16:42.012 "name": "BaseBdev1", 00:16:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.012 "is_configured": false, 00:16:42.012 "data_offset": 0, 00:16:42.012 "data_size": 0 00:16:42.012 }, 00:16:42.012 { 00:16:42.012 "name": "BaseBdev2", 00:16:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.012 "is_configured": false, 00:16:42.012 "data_offset": 0, 00:16:42.012 "data_size": 0 00:16:42.012 }, 00:16:42.012 { 00:16:42.012 "name": "BaseBdev3", 00:16:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.012 "is_configured": false, 00:16:42.012 "data_offset": 0, 00:16:42.012 "data_size": 0 00:16:42.012 }, 00:16:42.012 { 00:16:42.012 "name": "BaseBdev4", 00:16:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.012 "is_configured": false, 00:16:42.012 "data_offset": 0, 00:16:42.012 "data_size": 0 00:16:42.012 } 00:16:42.012 ] 00:16:42.012 }' 00:16:42.012 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.012 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.274 [2024-11-04 11:49:07.710620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.274 [2024-11-04 11:49:07.710737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.274 [2024-11-04 11:49:07.722581] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:42.274 [2024-11-04 11:49:07.722662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:42.274 [2024-11-04 11:49:07.722689] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.274 [2024-11-04 11:49:07.722711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.274 [2024-11-04 11:49:07.722728] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.274 [2024-11-04 11:49:07.722748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.274 [2024-11-04 11:49:07.722765] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:42.274 [2024-11-04 11:49:07.722784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.274 [2024-11-04 11:49:07.771216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.274 BaseBdev1 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.274 
11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.274 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.535 [ 00:16:42.535 { 00:16:42.535 "name": "BaseBdev1", 00:16:42.535 "aliases": [ 00:16:42.535 "9b35fb57-4f6d-42e6-a865-8dde0bc5b121" 00:16:42.535 ], 00:16:42.535 "product_name": "Malloc disk", 00:16:42.535 "block_size": 512, 00:16:42.535 "num_blocks": 65536, 00:16:42.535 "uuid": "9b35fb57-4f6d-42e6-a865-8dde0bc5b121", 00:16:42.535 "assigned_rate_limits": { 00:16:42.535 "rw_ios_per_sec": 0, 00:16:42.535 "rw_mbytes_per_sec": 0, 00:16:42.535 "r_mbytes_per_sec": 0, 00:16:42.535 "w_mbytes_per_sec": 0 00:16:42.535 }, 00:16:42.535 "claimed": true, 00:16:42.535 "claim_type": "exclusive_write", 00:16:42.535 "zoned": false, 00:16:42.535 "supported_io_types": { 00:16:42.535 "read": true, 00:16:42.535 "write": true, 00:16:42.535 "unmap": true, 00:16:42.535 "flush": true, 00:16:42.535 "reset": true, 00:16:42.535 "nvme_admin": false, 00:16:42.535 "nvme_io": false, 00:16:42.535 "nvme_io_md": false, 00:16:42.535 "write_zeroes": true, 00:16:42.535 "zcopy": true, 00:16:42.535 "get_zone_info": false, 00:16:42.535 "zone_management": false, 00:16:42.535 "zone_append": false, 00:16:42.535 "compare": false, 00:16:42.535 "compare_and_write": false, 00:16:42.535 "abort": true, 00:16:42.535 "seek_hole": false, 00:16:42.535 "seek_data": false, 00:16:42.535 "copy": true, 00:16:42.535 "nvme_iov_md": false 00:16:42.535 }, 00:16:42.535 "memory_domains": [ 00:16:42.535 { 00:16:42.535 "dma_device_id": "system", 00:16:42.535 "dma_device_type": 1 00:16:42.535 }, 00:16:42.535 { 00:16:42.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.535 "dma_device_type": 2 00:16:42.535 } 00:16:42.535 ], 00:16:42.535 "driver_specific": {} 00:16:42.535 } 
00:16:42.535 ] 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:42.535 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.535 "name": "Existed_Raid", 00:16:42.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.536 "strip_size_kb": 64, 00:16:42.536 "state": "configuring", 00:16:42.536 "raid_level": "raid5f", 00:16:42.536 "superblock": false, 00:16:42.536 "num_base_bdevs": 4, 00:16:42.536 "num_base_bdevs_discovered": 1, 00:16:42.536 "num_base_bdevs_operational": 4, 00:16:42.536 "base_bdevs_list": [ 00:16:42.536 { 00:16:42.536 "name": "BaseBdev1", 00:16:42.536 "uuid": "9b35fb57-4f6d-42e6-a865-8dde0bc5b121", 00:16:42.536 "is_configured": true, 00:16:42.536 "data_offset": 0, 00:16:42.536 "data_size": 65536 00:16:42.536 }, 00:16:42.536 { 00:16:42.536 "name": "BaseBdev2", 00:16:42.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.536 "is_configured": false, 00:16:42.536 "data_offset": 0, 00:16:42.536 "data_size": 0 00:16:42.536 }, 00:16:42.536 { 00:16:42.536 "name": "BaseBdev3", 00:16:42.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.536 "is_configured": false, 00:16:42.536 "data_offset": 0, 00:16:42.536 "data_size": 0 00:16:42.536 }, 00:16:42.536 { 00:16:42.536 "name": "BaseBdev4", 00:16:42.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.536 "is_configured": false, 00:16:42.536 "data_offset": 0, 00:16:42.536 "data_size": 0 00:16:42.536 } 00:16:42.536 ] 00:16:42.536 }' 00:16:42.536 11:49:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.536 11:49:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.796 
[2024-11-04 11:49:08.258489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.796 [2024-11-04 11:49:08.258589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.796 [2024-11-04 11:49:08.270520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.796 [2024-11-04 11:49:08.272475] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.796 [2024-11-04 11:49:08.272564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.796 [2024-11-04 11:49:08.272579] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.796 [2024-11-04 11:49:08.272590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.796 [2024-11-04 11:49:08.272597] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:42.796 [2024-11-04 11:49:08.272606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.796 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.056 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.056 "name": "Existed_Raid", 00:16:43.056 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:43.056 "strip_size_kb": 64, 00:16:43.056 "state": "configuring", 00:16:43.056 "raid_level": "raid5f", 00:16:43.056 "superblock": false, 00:16:43.056 "num_base_bdevs": 4, 00:16:43.056 "num_base_bdevs_discovered": 1, 00:16:43.056 "num_base_bdevs_operational": 4, 00:16:43.056 "base_bdevs_list": [ 00:16:43.056 { 00:16:43.056 "name": "BaseBdev1", 00:16:43.056 "uuid": "9b35fb57-4f6d-42e6-a865-8dde0bc5b121", 00:16:43.056 "is_configured": true, 00:16:43.056 "data_offset": 0, 00:16:43.056 "data_size": 65536 00:16:43.056 }, 00:16:43.056 { 00:16:43.056 "name": "BaseBdev2", 00:16:43.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.056 "is_configured": false, 00:16:43.056 "data_offset": 0, 00:16:43.056 "data_size": 0 00:16:43.056 }, 00:16:43.056 { 00:16:43.056 "name": "BaseBdev3", 00:16:43.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.056 "is_configured": false, 00:16:43.056 "data_offset": 0, 00:16:43.056 "data_size": 0 00:16:43.056 }, 00:16:43.056 { 00:16:43.056 "name": "BaseBdev4", 00:16:43.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.056 "is_configured": false, 00:16:43.056 "data_offset": 0, 00:16:43.056 "data_size": 0 00:16:43.056 } 00:16:43.056 ] 00:16:43.056 }' 00:16:43.056 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.056 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.315 [2024-11-04 11:49:08.765587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.315 BaseBdev2 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:43.315 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.316 [ 00:16:43.316 { 00:16:43.316 "name": "BaseBdev2", 00:16:43.316 "aliases": [ 00:16:43.316 "f79c87ad-72bd-4f56-ba98-6246729f0c80" 00:16:43.316 ], 00:16:43.316 "product_name": "Malloc disk", 00:16:43.316 "block_size": 512, 00:16:43.316 "num_blocks": 65536, 00:16:43.316 "uuid": "f79c87ad-72bd-4f56-ba98-6246729f0c80", 00:16:43.316 "assigned_rate_limits": { 00:16:43.316 "rw_ios_per_sec": 0, 00:16:43.316 "rw_mbytes_per_sec": 0, 00:16:43.316 
"r_mbytes_per_sec": 0, 00:16:43.316 "w_mbytes_per_sec": 0 00:16:43.316 }, 00:16:43.316 "claimed": true, 00:16:43.316 "claim_type": "exclusive_write", 00:16:43.316 "zoned": false, 00:16:43.316 "supported_io_types": { 00:16:43.316 "read": true, 00:16:43.316 "write": true, 00:16:43.316 "unmap": true, 00:16:43.316 "flush": true, 00:16:43.316 "reset": true, 00:16:43.316 "nvme_admin": false, 00:16:43.316 "nvme_io": false, 00:16:43.316 "nvme_io_md": false, 00:16:43.316 "write_zeroes": true, 00:16:43.316 "zcopy": true, 00:16:43.316 "get_zone_info": false, 00:16:43.316 "zone_management": false, 00:16:43.316 "zone_append": false, 00:16:43.316 "compare": false, 00:16:43.316 "compare_and_write": false, 00:16:43.316 "abort": true, 00:16:43.316 "seek_hole": false, 00:16:43.316 "seek_data": false, 00:16:43.316 "copy": true, 00:16:43.316 "nvme_iov_md": false 00:16:43.316 }, 00:16:43.316 "memory_domains": [ 00:16:43.316 { 00:16:43.316 "dma_device_id": "system", 00:16:43.316 "dma_device_type": 1 00:16:43.316 }, 00:16:43.316 { 00:16:43.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.316 "dma_device_type": 2 00:16:43.316 } 00:16:43.316 ], 00:16:43.316 "driver_specific": {} 00:16:43.316 } 00:16:43.316 ] 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.316 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.575 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.575 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.575 "name": "Existed_Raid", 00:16:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.575 "strip_size_kb": 64, 00:16:43.575 "state": "configuring", 00:16:43.575 "raid_level": "raid5f", 00:16:43.575 "superblock": false, 00:16:43.575 "num_base_bdevs": 4, 00:16:43.575 "num_base_bdevs_discovered": 2, 00:16:43.575 "num_base_bdevs_operational": 4, 00:16:43.575 "base_bdevs_list": [ 00:16:43.575 { 00:16:43.575 "name": "BaseBdev1", 00:16:43.575 "uuid": 
"9b35fb57-4f6d-42e6-a865-8dde0bc5b121", 00:16:43.575 "is_configured": true, 00:16:43.575 "data_offset": 0, 00:16:43.575 "data_size": 65536 00:16:43.575 }, 00:16:43.575 { 00:16:43.575 "name": "BaseBdev2", 00:16:43.575 "uuid": "f79c87ad-72bd-4f56-ba98-6246729f0c80", 00:16:43.575 "is_configured": true, 00:16:43.575 "data_offset": 0, 00:16:43.575 "data_size": 65536 00:16:43.575 }, 00:16:43.575 { 00:16:43.575 "name": "BaseBdev3", 00:16:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.575 "is_configured": false, 00:16:43.575 "data_offset": 0, 00:16:43.575 "data_size": 0 00:16:43.575 }, 00:16:43.575 { 00:16:43.575 "name": "BaseBdev4", 00:16:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.575 "is_configured": false, 00:16:43.576 "data_offset": 0, 00:16:43.576 "data_size": 0 00:16:43.576 } 00:16:43.576 ] 00:16:43.576 }' 00:16:43.576 11:49:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.576 11:49:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.835 [2024-11-04 11:49:09.279612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.835 BaseBdev3 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.835 [ 00:16:43.835 { 00:16:43.835 "name": "BaseBdev3", 00:16:43.835 "aliases": [ 00:16:43.835 "bef3f3b0-ce29-44a8-a7a8-88e419f7c4d6" 00:16:43.835 ], 00:16:43.835 "product_name": "Malloc disk", 00:16:43.835 "block_size": 512, 00:16:43.835 "num_blocks": 65536, 00:16:43.835 "uuid": "bef3f3b0-ce29-44a8-a7a8-88e419f7c4d6", 00:16:43.835 "assigned_rate_limits": { 00:16:43.835 "rw_ios_per_sec": 0, 00:16:43.835 "rw_mbytes_per_sec": 0, 00:16:43.835 "r_mbytes_per_sec": 0, 00:16:43.835 "w_mbytes_per_sec": 0 00:16:43.835 }, 00:16:43.835 "claimed": true, 00:16:43.835 "claim_type": "exclusive_write", 00:16:43.835 "zoned": false, 00:16:43.835 "supported_io_types": { 00:16:43.835 "read": true, 00:16:43.835 "write": true, 00:16:43.835 "unmap": true, 00:16:43.835 "flush": true, 00:16:43.835 "reset": true, 00:16:43.835 "nvme_admin": false, 
00:16:43.835 "nvme_io": false, 00:16:43.835 "nvme_io_md": false, 00:16:43.835 "write_zeroes": true, 00:16:43.835 "zcopy": true, 00:16:43.835 "get_zone_info": false, 00:16:43.835 "zone_management": false, 00:16:43.835 "zone_append": false, 00:16:43.835 "compare": false, 00:16:43.835 "compare_and_write": false, 00:16:43.835 "abort": true, 00:16:43.835 "seek_hole": false, 00:16:43.835 "seek_data": false, 00:16:43.835 "copy": true, 00:16:43.835 "nvme_iov_md": false 00:16:43.835 }, 00:16:43.835 "memory_domains": [ 00:16:43.835 { 00:16:43.835 "dma_device_id": "system", 00:16:43.835 "dma_device_type": 1 00:16:43.835 }, 00:16:43.835 { 00:16:43.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.835 "dma_device_type": 2 00:16:43.835 } 00:16:43.835 ], 00:16:43.835 "driver_specific": {} 00:16:43.835 } 00:16:43.835 ] 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.835 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.095 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.095 "name": "Existed_Raid", 00:16:44.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.095 "strip_size_kb": 64, 00:16:44.095 "state": "configuring", 00:16:44.095 "raid_level": "raid5f", 00:16:44.095 "superblock": false, 00:16:44.095 "num_base_bdevs": 4, 00:16:44.095 "num_base_bdevs_discovered": 3, 00:16:44.095 "num_base_bdevs_operational": 4, 00:16:44.095 "base_bdevs_list": [ 00:16:44.095 { 00:16:44.095 "name": "BaseBdev1", 00:16:44.095 "uuid": "9b35fb57-4f6d-42e6-a865-8dde0bc5b121", 00:16:44.095 "is_configured": true, 00:16:44.095 "data_offset": 0, 00:16:44.095 "data_size": 65536 00:16:44.095 }, 00:16:44.095 { 00:16:44.095 "name": "BaseBdev2", 00:16:44.095 "uuid": "f79c87ad-72bd-4f56-ba98-6246729f0c80", 00:16:44.095 "is_configured": true, 00:16:44.095 "data_offset": 0, 00:16:44.095 "data_size": 65536 00:16:44.095 }, 00:16:44.095 { 
00:16:44.095 "name": "BaseBdev3", 00:16:44.095 "uuid": "bef3f3b0-ce29-44a8-a7a8-88e419f7c4d6", 00:16:44.095 "is_configured": true, 00:16:44.095 "data_offset": 0, 00:16:44.095 "data_size": 65536 00:16:44.095 }, 00:16:44.095 { 00:16:44.095 "name": "BaseBdev4", 00:16:44.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.095 "is_configured": false, 00:16:44.095 "data_offset": 0, 00:16:44.095 "data_size": 0 00:16:44.095 } 00:16:44.095 ] 00:16:44.095 }' 00:16:44.095 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.095 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.354 [2024-11-04 11:49:09.787738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.354 [2024-11-04 11:49:09.787909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:44.354 [2024-11-04 11:49:09.787939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:44.354 [2024-11-04 11:49:09.788274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:44.354 [2024-11-04 11:49:09.796293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:44.354 [2024-11-04 11:49:09.796319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:44.354 [2024-11-04 11:49:09.796614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.354 BaseBdev4 00:16:44.354 11:49:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.354 [ 00:16:44.354 { 00:16:44.354 "name": "BaseBdev4", 00:16:44.354 "aliases": [ 00:16:44.354 "1a5d20bc-018b-4af7-a842-05497e7acae3" 00:16:44.354 ], 00:16:44.354 "product_name": "Malloc disk", 00:16:44.354 "block_size": 512, 00:16:44.354 "num_blocks": 65536, 00:16:44.354 "uuid": "1a5d20bc-018b-4af7-a842-05497e7acae3", 00:16:44.354 "assigned_rate_limits": { 00:16:44.354 "rw_ios_per_sec": 0, 00:16:44.354 
"rw_mbytes_per_sec": 0, 00:16:44.354 "r_mbytes_per_sec": 0, 00:16:44.354 "w_mbytes_per_sec": 0 00:16:44.354 }, 00:16:44.354 "claimed": true, 00:16:44.354 "claim_type": "exclusive_write", 00:16:44.354 "zoned": false, 00:16:44.354 "supported_io_types": { 00:16:44.354 "read": true, 00:16:44.354 "write": true, 00:16:44.354 "unmap": true, 00:16:44.354 "flush": true, 00:16:44.354 "reset": true, 00:16:44.354 "nvme_admin": false, 00:16:44.354 "nvme_io": false, 00:16:44.354 "nvme_io_md": false, 00:16:44.354 "write_zeroes": true, 00:16:44.354 "zcopy": true, 00:16:44.354 "get_zone_info": false, 00:16:44.354 "zone_management": false, 00:16:44.354 "zone_append": false, 00:16:44.354 "compare": false, 00:16:44.354 "compare_and_write": false, 00:16:44.354 "abort": true, 00:16:44.354 "seek_hole": false, 00:16:44.354 "seek_data": false, 00:16:44.354 "copy": true, 00:16:44.354 "nvme_iov_md": false 00:16:44.354 }, 00:16:44.354 "memory_domains": [ 00:16:44.354 { 00:16:44.354 "dma_device_id": "system", 00:16:44.354 "dma_device_type": 1 00:16:44.354 }, 00:16:44.354 { 00:16:44.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.354 "dma_device_type": 2 00:16:44.354 } 00:16:44.354 ], 00:16:44.354 "driver_specific": {} 00:16:44.354 } 00:16:44.354 ] 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.354 11:49:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.354 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.613 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.613 "name": "Existed_Raid", 00:16:44.613 "uuid": "fc79cae5-6712-407f-9579-2e4f43ab0492", 00:16:44.613 "strip_size_kb": 64, 00:16:44.613 "state": "online", 00:16:44.613 "raid_level": "raid5f", 00:16:44.613 "superblock": false, 00:16:44.613 "num_base_bdevs": 4, 00:16:44.613 "num_base_bdevs_discovered": 4, 00:16:44.613 "num_base_bdevs_operational": 4, 00:16:44.613 "base_bdevs_list": [ 00:16:44.613 { 00:16:44.613 "name": 
"BaseBdev1", 00:16:44.613 "uuid": "9b35fb57-4f6d-42e6-a865-8dde0bc5b121", 00:16:44.613 "is_configured": true, 00:16:44.613 "data_offset": 0, 00:16:44.613 "data_size": 65536 00:16:44.613 }, 00:16:44.613 { 00:16:44.613 "name": "BaseBdev2", 00:16:44.613 "uuid": "f79c87ad-72bd-4f56-ba98-6246729f0c80", 00:16:44.613 "is_configured": true, 00:16:44.613 "data_offset": 0, 00:16:44.613 "data_size": 65536 00:16:44.613 }, 00:16:44.613 { 00:16:44.613 "name": "BaseBdev3", 00:16:44.613 "uuid": "bef3f3b0-ce29-44a8-a7a8-88e419f7c4d6", 00:16:44.613 "is_configured": true, 00:16:44.613 "data_offset": 0, 00:16:44.613 "data_size": 65536 00:16:44.613 }, 00:16:44.613 { 00:16:44.613 "name": "BaseBdev4", 00:16:44.613 "uuid": "1a5d20bc-018b-4af7-a842-05497e7acae3", 00:16:44.613 "is_configured": true, 00:16:44.613 "data_offset": 0, 00:16:44.613 "data_size": 65536 00:16:44.613 } 00:16:44.613 ] 00:16:44.613 }' 00:16:44.613 11:49:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.613 11:49:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.873 [2024-11-04 11:49:10.273081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.873 "name": "Existed_Raid", 00:16:44.873 "aliases": [ 00:16:44.873 "fc79cae5-6712-407f-9579-2e4f43ab0492" 00:16:44.873 ], 00:16:44.873 "product_name": "Raid Volume", 00:16:44.873 "block_size": 512, 00:16:44.873 "num_blocks": 196608, 00:16:44.873 "uuid": "fc79cae5-6712-407f-9579-2e4f43ab0492", 00:16:44.873 "assigned_rate_limits": { 00:16:44.873 "rw_ios_per_sec": 0, 00:16:44.873 "rw_mbytes_per_sec": 0, 00:16:44.873 "r_mbytes_per_sec": 0, 00:16:44.873 "w_mbytes_per_sec": 0 00:16:44.873 }, 00:16:44.873 "claimed": false, 00:16:44.873 "zoned": false, 00:16:44.873 "supported_io_types": { 00:16:44.873 "read": true, 00:16:44.873 "write": true, 00:16:44.873 "unmap": false, 00:16:44.873 "flush": false, 00:16:44.873 "reset": true, 00:16:44.873 "nvme_admin": false, 00:16:44.873 "nvme_io": false, 00:16:44.873 "nvme_io_md": false, 00:16:44.873 "write_zeroes": true, 00:16:44.873 "zcopy": false, 00:16:44.873 "get_zone_info": false, 00:16:44.873 "zone_management": false, 00:16:44.873 "zone_append": false, 00:16:44.873 "compare": false, 00:16:44.873 "compare_and_write": false, 00:16:44.873 "abort": false, 00:16:44.873 "seek_hole": false, 00:16:44.873 "seek_data": false, 00:16:44.873 "copy": false, 00:16:44.873 "nvme_iov_md": false 00:16:44.873 }, 00:16:44.873 "driver_specific": { 00:16:44.873 "raid": { 00:16:44.873 "uuid": "fc79cae5-6712-407f-9579-2e4f43ab0492", 00:16:44.873 "strip_size_kb": 64, 
00:16:44.873 "state": "online", 00:16:44.873 "raid_level": "raid5f", 00:16:44.873 "superblock": false, 00:16:44.873 "num_base_bdevs": 4, 00:16:44.873 "num_base_bdevs_discovered": 4, 00:16:44.873 "num_base_bdevs_operational": 4, 00:16:44.873 "base_bdevs_list": [ 00:16:44.873 { 00:16:44.873 "name": "BaseBdev1", 00:16:44.873 "uuid": "9b35fb57-4f6d-42e6-a865-8dde0bc5b121", 00:16:44.873 "is_configured": true, 00:16:44.873 "data_offset": 0, 00:16:44.873 "data_size": 65536 00:16:44.873 }, 00:16:44.873 { 00:16:44.873 "name": "BaseBdev2", 00:16:44.873 "uuid": "f79c87ad-72bd-4f56-ba98-6246729f0c80", 00:16:44.873 "is_configured": true, 00:16:44.873 "data_offset": 0, 00:16:44.873 "data_size": 65536 00:16:44.873 }, 00:16:44.873 { 00:16:44.873 "name": "BaseBdev3", 00:16:44.873 "uuid": "bef3f3b0-ce29-44a8-a7a8-88e419f7c4d6", 00:16:44.873 "is_configured": true, 00:16:44.873 "data_offset": 0, 00:16:44.873 "data_size": 65536 00:16:44.873 }, 00:16:44.873 { 00:16:44.873 "name": "BaseBdev4", 00:16:44.873 "uuid": "1a5d20bc-018b-4af7-a842-05497e7acae3", 00:16:44.873 "is_configured": true, 00:16:44.873 "data_offset": 0, 00:16:44.873 "data_size": 65536 00:16:44.873 } 00:16:44.873 ] 00:16:44.873 } 00:16:44.873 } 00:16:44.873 }' 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:44.873 BaseBdev2 00:16:44.873 BaseBdev3 00:16:44.873 BaseBdev4' 00:16:44.873 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.133 11:49:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.133 11:49:10 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:45.133 [2024-11-04 11:49:10.624409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.392 11:49:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.392 "name": "Existed_Raid", 00:16:45.392 "uuid": "fc79cae5-6712-407f-9579-2e4f43ab0492", 00:16:45.392 "strip_size_kb": 64, 00:16:45.392 "state": "online", 00:16:45.392 "raid_level": "raid5f", 00:16:45.392 "superblock": false, 00:16:45.392 "num_base_bdevs": 4, 00:16:45.392 "num_base_bdevs_discovered": 3, 00:16:45.392 "num_base_bdevs_operational": 3, 00:16:45.392 "base_bdevs_list": [ 00:16:45.392 { 00:16:45.392 "name": null, 00:16:45.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.392 "is_configured": false, 00:16:45.392 "data_offset": 0, 00:16:45.392 "data_size": 65536 00:16:45.392 }, 00:16:45.392 { 00:16:45.392 "name": "BaseBdev2", 00:16:45.392 "uuid": "f79c87ad-72bd-4f56-ba98-6246729f0c80", 00:16:45.392 "is_configured": true, 00:16:45.392 "data_offset": 0, 00:16:45.392 "data_size": 65536 00:16:45.392 }, 00:16:45.392 { 00:16:45.392 "name": "BaseBdev3", 00:16:45.392 "uuid": "bef3f3b0-ce29-44a8-a7a8-88e419f7c4d6", 00:16:45.392 "is_configured": true, 00:16:45.392 "data_offset": 0, 00:16:45.392 "data_size": 65536 00:16:45.392 }, 00:16:45.392 { 00:16:45.392 "name": "BaseBdev4", 00:16:45.392 "uuid": "1a5d20bc-018b-4af7-a842-05497e7acae3", 00:16:45.392 "is_configured": true, 00:16:45.392 "data_offset": 0, 00:16:45.392 "data_size": 65536 00:16:45.392 } 00:16:45.392 ] 00:16:45.392 }' 00:16:45.392 
11:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.392 11:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.718 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:45.718 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.718 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.718 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.718 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.718 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.718 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.980 [2024-11-04 11:49:11.258766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.980 [2024-11-04 11:49:11.258944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.980 [2024-11-04 11:49:11.362786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.980 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.980 [2024-11-04 11:49:11.410735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.240 [2024-11-04 11:49:11.562193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:46.240 [2024-11-04 11:49:11.562290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.240 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.241 11:49:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.241 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.501 BaseBdev2 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.501 [ 00:16:46.501 { 00:16:46.501 "name": "BaseBdev2", 00:16:46.501 "aliases": [ 00:16:46.501 "6394c04a-fd9a-466d-afac-6bb1fa686712" 00:16:46.501 ], 00:16:46.501 "product_name": "Malloc disk", 00:16:46.501 "block_size": 512, 00:16:46.501 "num_blocks": 65536, 00:16:46.501 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:46.501 "assigned_rate_limits": { 00:16:46.501 "rw_ios_per_sec": 0, 00:16:46.501 "rw_mbytes_per_sec": 0, 00:16:46.501 "r_mbytes_per_sec": 0, 00:16:46.501 "w_mbytes_per_sec": 0 00:16:46.501 }, 00:16:46.501 "claimed": false, 00:16:46.501 "zoned": false, 00:16:46.501 "supported_io_types": { 00:16:46.501 "read": true, 00:16:46.501 "write": true, 00:16:46.501 "unmap": true, 00:16:46.501 "flush": true, 00:16:46.501 "reset": true, 00:16:46.501 "nvme_admin": false, 00:16:46.501 "nvme_io": false, 00:16:46.501 "nvme_io_md": false, 00:16:46.501 "write_zeroes": true, 00:16:46.501 "zcopy": true, 00:16:46.501 "get_zone_info": false, 00:16:46.501 "zone_management": false, 00:16:46.501 "zone_append": false, 00:16:46.501 "compare": false, 00:16:46.501 "compare_and_write": false, 00:16:46.501 "abort": true, 00:16:46.501 "seek_hole": false, 00:16:46.501 "seek_data": false, 00:16:46.501 "copy": true, 00:16:46.501 "nvme_iov_md": false 00:16:46.501 }, 00:16:46.501 "memory_domains": [ 00:16:46.501 { 00:16:46.501 "dma_device_id": "system", 00:16:46.501 "dma_device_type": 1 00:16:46.501 }, 
00:16:46.501 { 00:16:46.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.501 "dma_device_type": 2 00:16:46.501 } 00:16:46.501 ], 00:16:46.501 "driver_specific": {} 00:16:46.501 } 00:16:46.501 ] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.501 BaseBdev3 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.501 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.501 [ 00:16:46.501 { 00:16:46.501 "name": "BaseBdev3", 00:16:46.501 "aliases": [ 00:16:46.501 "ed273edb-36e9-4080-a1c8-ef49b8488c39" 00:16:46.501 ], 00:16:46.501 "product_name": "Malloc disk", 00:16:46.501 "block_size": 512, 00:16:46.501 "num_blocks": 65536, 00:16:46.501 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:46.501 "assigned_rate_limits": { 00:16:46.501 "rw_ios_per_sec": 0, 00:16:46.501 "rw_mbytes_per_sec": 0, 00:16:46.501 "r_mbytes_per_sec": 0, 00:16:46.501 "w_mbytes_per_sec": 0 00:16:46.501 }, 00:16:46.501 "claimed": false, 00:16:46.501 "zoned": false, 00:16:46.501 "supported_io_types": { 00:16:46.501 "read": true, 00:16:46.501 "write": true, 00:16:46.501 "unmap": true, 00:16:46.501 "flush": true, 00:16:46.501 "reset": true, 00:16:46.501 "nvme_admin": false, 00:16:46.501 "nvme_io": false, 00:16:46.501 "nvme_io_md": false, 00:16:46.501 "write_zeroes": true, 00:16:46.501 "zcopy": true, 00:16:46.501 "get_zone_info": false, 00:16:46.501 "zone_management": false, 00:16:46.501 "zone_append": false, 00:16:46.501 "compare": false, 00:16:46.501 "compare_and_write": false, 00:16:46.501 "abort": true, 00:16:46.501 "seek_hole": false, 00:16:46.501 "seek_data": false, 00:16:46.501 "copy": true, 00:16:46.501 "nvme_iov_md": false 00:16:46.501 }, 00:16:46.501 "memory_domains": [ 00:16:46.501 { 00:16:46.501 "dma_device_id": "system", 00:16:46.501 
"dma_device_type": 1 00:16:46.501 }, 00:16:46.501 { 00:16:46.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.501 "dma_device_type": 2 00:16:46.501 } 00:16:46.501 ], 00:16:46.502 "driver_specific": {} 00:16:46.502 } 00:16:46.502 ] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.502 BaseBdev4 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:46.502 11:49:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.502 [ 00:16:46.502 { 00:16:46.502 "name": "BaseBdev4", 00:16:46.502 "aliases": [ 00:16:46.502 "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f" 00:16:46.502 ], 00:16:46.502 "product_name": "Malloc disk", 00:16:46.502 "block_size": 512, 00:16:46.502 "num_blocks": 65536, 00:16:46.502 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:46.502 "assigned_rate_limits": { 00:16:46.502 "rw_ios_per_sec": 0, 00:16:46.502 "rw_mbytes_per_sec": 0, 00:16:46.502 "r_mbytes_per_sec": 0, 00:16:46.502 "w_mbytes_per_sec": 0 00:16:46.502 }, 00:16:46.502 "claimed": false, 00:16:46.502 "zoned": false, 00:16:46.502 "supported_io_types": { 00:16:46.502 "read": true, 00:16:46.502 "write": true, 00:16:46.502 "unmap": true, 00:16:46.502 "flush": true, 00:16:46.502 "reset": true, 00:16:46.502 "nvme_admin": false, 00:16:46.502 "nvme_io": false, 00:16:46.502 "nvme_io_md": false, 00:16:46.502 "write_zeroes": true, 00:16:46.502 "zcopy": true, 00:16:46.502 "get_zone_info": false, 00:16:46.502 "zone_management": false, 00:16:46.502 "zone_append": false, 00:16:46.502 "compare": false, 00:16:46.502 "compare_and_write": false, 00:16:46.502 "abort": true, 00:16:46.502 "seek_hole": false, 00:16:46.502 "seek_data": false, 00:16:46.502 "copy": true, 00:16:46.502 "nvme_iov_md": false 00:16:46.502 }, 00:16:46.502 "memory_domains": [ 00:16:46.502 { 00:16:46.502 
"dma_device_id": "system", 00:16:46.502 "dma_device_type": 1 00:16:46.502 }, 00:16:46.502 { 00:16:46.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.502 "dma_device_type": 2 00:16:46.502 } 00:16:46.502 ], 00:16:46.502 "driver_specific": {} 00:16:46.502 } 00:16:46.502 ] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.502 [2024-11-04 11:49:11.974925] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.502 [2024-11-04 11:49:11.975015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.502 [2024-11-04 11:49:11.975074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.502 [2024-11-04 11:49:11.977061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:46.502 [2024-11-04 11:49:11.977164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.502 11:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.502 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.762 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.762 "name": "Existed_Raid", 00:16:46.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.762 "strip_size_kb": 64, 00:16:46.762 "state": "configuring", 00:16:46.762 "raid_level": "raid5f", 00:16:46.762 "superblock": false, 00:16:46.762 
"num_base_bdevs": 4, 00:16:46.762 "num_base_bdevs_discovered": 3, 00:16:46.762 "num_base_bdevs_operational": 4, 00:16:46.762 "base_bdevs_list": [ 00:16:46.762 { 00:16:46.762 "name": "BaseBdev1", 00:16:46.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.762 "is_configured": false, 00:16:46.762 "data_offset": 0, 00:16:46.762 "data_size": 0 00:16:46.762 }, 00:16:46.762 { 00:16:46.762 "name": "BaseBdev2", 00:16:46.762 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:46.762 "is_configured": true, 00:16:46.762 "data_offset": 0, 00:16:46.762 "data_size": 65536 00:16:46.762 }, 00:16:46.762 { 00:16:46.762 "name": "BaseBdev3", 00:16:46.762 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:46.762 "is_configured": true, 00:16:46.762 "data_offset": 0, 00:16:46.762 "data_size": 65536 00:16:46.762 }, 00:16:46.762 { 00:16:46.762 "name": "BaseBdev4", 00:16:46.762 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:46.762 "is_configured": true, 00:16:46.762 "data_offset": 0, 00:16:46.762 "data_size": 65536 00:16:46.762 } 00:16:46.762 ] 00:16:46.762 }' 00:16:46.762 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.762 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.021 [2024-11-04 11:49:12.414232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.021 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.021 "name": "Existed_Raid", 00:16:47.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.021 "strip_size_kb": 64, 00:16:47.021 "state": "configuring", 00:16:47.021 "raid_level": "raid5f", 00:16:47.021 "superblock": false, 00:16:47.021 "num_base_bdevs": 4, 
00:16:47.021 "num_base_bdevs_discovered": 2, 00:16:47.021 "num_base_bdevs_operational": 4, 00:16:47.021 "base_bdevs_list": [ 00:16:47.021 { 00:16:47.021 "name": "BaseBdev1", 00:16:47.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.021 "is_configured": false, 00:16:47.021 "data_offset": 0, 00:16:47.021 "data_size": 0 00:16:47.021 }, 00:16:47.021 { 00:16:47.021 "name": null, 00:16:47.021 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:47.021 "is_configured": false, 00:16:47.021 "data_offset": 0, 00:16:47.021 "data_size": 65536 00:16:47.021 }, 00:16:47.021 { 00:16:47.021 "name": "BaseBdev3", 00:16:47.021 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:47.021 "is_configured": true, 00:16:47.021 "data_offset": 0, 00:16:47.021 "data_size": 65536 00:16:47.021 }, 00:16:47.022 { 00:16:47.022 "name": "BaseBdev4", 00:16:47.022 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:47.022 "is_configured": true, 00:16:47.022 "data_offset": 0, 00:16:47.022 "data_size": 65536 00:16:47.022 } 00:16:47.022 ] 00:16:47.022 }' 00:16:47.022 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.022 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:47.592 11:49:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 [2024-11-04 11:49:12.957473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.592 BaseBdev1 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.592 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.592 11:49:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.592 [ 00:16:47.592 { 00:16:47.592 "name": "BaseBdev1", 00:16:47.592 "aliases": [ 00:16:47.592 "4a433500-d769-40ea-8011-f98ee4c8504f" 00:16:47.592 ], 00:16:47.592 "product_name": "Malloc disk", 00:16:47.592 "block_size": 512, 00:16:47.592 "num_blocks": 65536, 00:16:47.592 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:47.592 "assigned_rate_limits": { 00:16:47.592 "rw_ios_per_sec": 0, 00:16:47.592 "rw_mbytes_per_sec": 0, 00:16:47.592 "r_mbytes_per_sec": 0, 00:16:47.592 "w_mbytes_per_sec": 0 00:16:47.592 }, 00:16:47.592 "claimed": true, 00:16:47.592 "claim_type": "exclusive_write", 00:16:47.592 "zoned": false, 00:16:47.593 "supported_io_types": { 00:16:47.593 "read": true, 00:16:47.593 "write": true, 00:16:47.593 "unmap": true, 00:16:47.593 "flush": true, 00:16:47.593 "reset": true, 00:16:47.593 "nvme_admin": false, 00:16:47.593 "nvme_io": false, 00:16:47.593 "nvme_io_md": false, 00:16:47.593 "write_zeroes": true, 00:16:47.593 "zcopy": true, 00:16:47.593 "get_zone_info": false, 00:16:47.593 "zone_management": false, 00:16:47.593 "zone_append": false, 00:16:47.593 "compare": false, 00:16:47.593 "compare_and_write": false, 00:16:47.593 "abort": true, 00:16:47.593 "seek_hole": false, 00:16:47.593 "seek_data": false, 00:16:47.593 "copy": true, 00:16:47.593 "nvme_iov_md": false 00:16:47.593 }, 00:16:47.593 "memory_domains": [ 00:16:47.593 { 00:16:47.593 "dma_device_id": "system", 00:16:47.593 "dma_device_type": 1 00:16:47.593 }, 00:16:47.593 { 00:16:47.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.593 "dma_device_type": 2 00:16:47.593 } 00:16:47.593 ], 00:16:47.593 "driver_specific": {} 00:16:47.593 } 00:16:47.593 ] 00:16:47.593 11:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:47.593 11:49:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.593 "name": "Existed_Raid", 00:16:47.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.593 "strip_size_kb": 64, 00:16:47.593 "state": 
"configuring", 00:16:47.593 "raid_level": "raid5f", 00:16:47.593 "superblock": false, 00:16:47.593 "num_base_bdevs": 4, 00:16:47.593 "num_base_bdevs_discovered": 3, 00:16:47.593 "num_base_bdevs_operational": 4, 00:16:47.593 "base_bdevs_list": [ 00:16:47.593 { 00:16:47.593 "name": "BaseBdev1", 00:16:47.593 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:47.593 "is_configured": true, 00:16:47.593 "data_offset": 0, 00:16:47.593 "data_size": 65536 00:16:47.593 }, 00:16:47.593 { 00:16:47.593 "name": null, 00:16:47.593 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:47.593 "is_configured": false, 00:16:47.593 "data_offset": 0, 00:16:47.593 "data_size": 65536 00:16:47.593 }, 00:16:47.593 { 00:16:47.593 "name": "BaseBdev3", 00:16:47.593 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:47.593 "is_configured": true, 00:16:47.593 "data_offset": 0, 00:16:47.593 "data_size": 65536 00:16:47.593 }, 00:16:47.593 { 00:16:47.593 "name": "BaseBdev4", 00:16:47.593 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:47.593 "is_configured": true, 00:16:47.593 "data_offset": 0, 00:16:47.593 "data_size": 65536 00:16:47.593 } 00:16:47.593 ] 00:16:47.593 }' 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.593 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.161 11:49:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.161 [2024-11-04 11:49:13.464655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.161 11:49:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.161 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.161 "name": "Existed_Raid", 00:16:48.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.161 "strip_size_kb": 64, 00:16:48.161 "state": "configuring", 00:16:48.162 "raid_level": "raid5f", 00:16:48.162 "superblock": false, 00:16:48.162 "num_base_bdevs": 4, 00:16:48.162 "num_base_bdevs_discovered": 2, 00:16:48.162 "num_base_bdevs_operational": 4, 00:16:48.162 "base_bdevs_list": [ 00:16:48.162 { 00:16:48.162 "name": "BaseBdev1", 00:16:48.162 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:48.162 "is_configured": true, 00:16:48.162 "data_offset": 0, 00:16:48.162 "data_size": 65536 00:16:48.162 }, 00:16:48.162 { 00:16:48.162 "name": null, 00:16:48.162 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:48.162 "is_configured": false, 00:16:48.162 "data_offset": 0, 00:16:48.162 "data_size": 65536 00:16:48.162 }, 00:16:48.162 { 00:16:48.162 "name": null, 00:16:48.162 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:48.162 "is_configured": false, 00:16:48.162 "data_offset": 0, 00:16:48.162 "data_size": 65536 00:16:48.162 }, 00:16:48.162 { 00:16:48.162 "name": "BaseBdev4", 00:16:48.162 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:48.162 "is_configured": true, 00:16:48.162 "data_offset": 0, 00:16:48.162 "data_size": 65536 00:16:48.162 } 00:16:48.162 ] 00:16:48.162 }' 00:16:48.162 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.162 11:49:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.421 [2024-11-04 11:49:13.931879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.421 
11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.421 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.680 "name": "Existed_Raid", 00:16:48.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.680 "strip_size_kb": 64, 00:16:48.680 "state": "configuring", 00:16:48.680 "raid_level": "raid5f", 00:16:48.680 "superblock": false, 00:16:48.680 "num_base_bdevs": 4, 00:16:48.680 "num_base_bdevs_discovered": 3, 00:16:48.680 "num_base_bdevs_operational": 4, 00:16:48.680 "base_bdevs_list": [ 00:16:48.680 { 00:16:48.680 "name": "BaseBdev1", 00:16:48.680 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:48.680 "is_configured": true, 00:16:48.680 "data_offset": 0, 00:16:48.680 "data_size": 65536 00:16:48.680 }, 00:16:48.680 { 00:16:48.680 "name": null, 00:16:48.680 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:48.680 "is_configured": 
false, 00:16:48.680 "data_offset": 0, 00:16:48.680 "data_size": 65536 00:16:48.680 }, 00:16:48.680 { 00:16:48.680 "name": "BaseBdev3", 00:16:48.680 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:48.680 "is_configured": true, 00:16:48.680 "data_offset": 0, 00:16:48.680 "data_size": 65536 00:16:48.680 }, 00:16:48.680 { 00:16:48.680 "name": "BaseBdev4", 00:16:48.680 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:48.680 "is_configured": true, 00:16:48.680 "data_offset": 0, 00:16:48.680 "data_size": 65536 00:16:48.680 } 00:16:48.680 ] 00:16:48.680 }' 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.680 11:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.939 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.939 [2024-11-04 11:49:14.387159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.204 "name": "Existed_Raid", 00:16:49.204 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:49.204 "strip_size_kb": 64, 00:16:49.204 "state": "configuring", 00:16:49.204 "raid_level": "raid5f", 00:16:49.204 "superblock": false, 00:16:49.204 "num_base_bdevs": 4, 00:16:49.204 "num_base_bdevs_discovered": 2, 00:16:49.204 "num_base_bdevs_operational": 4, 00:16:49.204 "base_bdevs_list": [ 00:16:49.204 { 00:16:49.204 "name": null, 00:16:49.204 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:49.204 "is_configured": false, 00:16:49.204 "data_offset": 0, 00:16:49.204 "data_size": 65536 00:16:49.204 }, 00:16:49.204 { 00:16:49.204 "name": null, 00:16:49.204 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:49.204 "is_configured": false, 00:16:49.204 "data_offset": 0, 00:16:49.204 "data_size": 65536 00:16:49.204 }, 00:16:49.204 { 00:16:49.204 "name": "BaseBdev3", 00:16:49.204 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:49.204 "is_configured": true, 00:16:49.204 "data_offset": 0, 00:16:49.204 "data_size": 65536 00:16:49.204 }, 00:16:49.204 { 00:16:49.204 "name": "BaseBdev4", 00:16:49.204 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:49.204 "is_configured": true, 00:16:49.204 "data_offset": 0, 00:16:49.204 "data_size": 65536 00:16:49.204 } 00:16:49.204 ] 00:16:49.204 }' 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.204 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.469 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.469 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.469 11:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.469 11:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.730 11:49:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.730 [2024-11-04 11:49:15.015003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.730 "name": "Existed_Raid", 00:16:49.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.730 "strip_size_kb": 64, 00:16:49.730 "state": "configuring", 00:16:49.730 "raid_level": "raid5f", 00:16:49.730 "superblock": false, 00:16:49.730 "num_base_bdevs": 4, 00:16:49.730 "num_base_bdevs_discovered": 3, 00:16:49.730 "num_base_bdevs_operational": 4, 00:16:49.730 "base_bdevs_list": [ 00:16:49.730 { 00:16:49.730 "name": null, 00:16:49.730 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:49.730 "is_configured": false, 00:16:49.730 "data_offset": 0, 00:16:49.730 "data_size": 65536 00:16:49.730 }, 00:16:49.730 { 00:16:49.730 "name": "BaseBdev2", 00:16:49.730 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:49.730 "is_configured": true, 00:16:49.730 "data_offset": 0, 00:16:49.730 "data_size": 65536 00:16:49.730 }, 00:16:49.730 { 00:16:49.730 "name": "BaseBdev3", 00:16:49.730 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:49.730 "is_configured": true, 00:16:49.730 "data_offset": 0, 00:16:49.730 "data_size": 65536 00:16:49.730 }, 00:16:49.730 { 00:16:49.730 "name": "BaseBdev4", 00:16:49.730 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:49.730 "is_configured": true, 00:16:49.730 "data_offset": 0, 00:16:49.730 "data_size": 65536 00:16:49.730 } 00:16:49.730 ] 00:16:49.730 }' 00:16:49.730 11:49:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.730 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.990 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.990 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.990 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.990 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:49.990 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a433500-d769-40ea-8011-f98ee4c8504f 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.250 [2024-11-04 11:49:15.623613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:50.250 [2024-11-04 
11:49:15.623744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:50.250 [2024-11-04 11:49:15.623768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:50.250 [2024-11-04 11:49:15.624095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:50.250 [2024-11-04 11:49:15.631135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:50.250 [2024-11-04 11:49:15.631198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:50.250 [2024-11-04 11:49:15.631515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.250 NewBaseBdev 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.250 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.250 [ 00:16:50.250 { 00:16:50.250 "name": "NewBaseBdev", 00:16:50.250 "aliases": [ 00:16:50.250 "4a433500-d769-40ea-8011-f98ee4c8504f" 00:16:50.250 ], 00:16:50.250 "product_name": "Malloc disk", 00:16:50.250 "block_size": 512, 00:16:50.250 "num_blocks": 65536, 00:16:50.250 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:50.250 "assigned_rate_limits": { 00:16:50.250 "rw_ios_per_sec": 0, 00:16:50.250 "rw_mbytes_per_sec": 0, 00:16:50.250 "r_mbytes_per_sec": 0, 00:16:50.250 "w_mbytes_per_sec": 0 00:16:50.250 }, 00:16:50.250 "claimed": true, 00:16:50.250 "claim_type": "exclusive_write", 00:16:50.250 "zoned": false, 00:16:50.250 "supported_io_types": { 00:16:50.250 "read": true, 00:16:50.250 "write": true, 00:16:50.250 "unmap": true, 00:16:50.250 "flush": true, 00:16:50.250 "reset": true, 00:16:50.250 "nvme_admin": false, 00:16:50.250 "nvme_io": false, 00:16:50.250 "nvme_io_md": false, 00:16:50.250 "write_zeroes": true, 00:16:50.250 "zcopy": true, 00:16:50.250 "get_zone_info": false, 00:16:50.250 "zone_management": false, 00:16:50.250 "zone_append": false, 00:16:50.250 "compare": false, 00:16:50.250 "compare_and_write": false, 00:16:50.250 "abort": true, 00:16:50.250 "seek_hole": false, 00:16:50.250 "seek_data": false, 00:16:50.250 "copy": true, 00:16:50.250 "nvme_iov_md": false 00:16:50.250 }, 00:16:50.250 "memory_domains": [ 00:16:50.250 { 00:16:50.251 "dma_device_id": "system", 00:16:50.251 "dma_device_type": 1 00:16:50.251 }, 00:16:50.251 { 00:16:50.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.251 "dma_device_type": 2 00:16:50.251 } 
00:16:50.251 ], 00:16:50.251 "driver_specific": {} 00:16:50.251 } 00:16:50.251 ] 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.251 "name": "Existed_Raid", 00:16:50.251 "uuid": "8b382b41-e431-43e4-b53a-29f3792c3e19", 00:16:50.251 "strip_size_kb": 64, 00:16:50.251 "state": "online", 00:16:50.251 "raid_level": "raid5f", 00:16:50.251 "superblock": false, 00:16:50.251 "num_base_bdevs": 4, 00:16:50.251 "num_base_bdevs_discovered": 4, 00:16:50.251 "num_base_bdevs_operational": 4, 00:16:50.251 "base_bdevs_list": [ 00:16:50.251 { 00:16:50.251 "name": "NewBaseBdev", 00:16:50.251 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:50.251 "is_configured": true, 00:16:50.251 "data_offset": 0, 00:16:50.251 "data_size": 65536 00:16:50.251 }, 00:16:50.251 { 00:16:50.251 "name": "BaseBdev2", 00:16:50.251 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:50.251 "is_configured": true, 00:16:50.251 "data_offset": 0, 00:16:50.251 "data_size": 65536 00:16:50.251 }, 00:16:50.251 { 00:16:50.251 "name": "BaseBdev3", 00:16:50.251 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:50.251 "is_configured": true, 00:16:50.251 "data_offset": 0, 00:16:50.251 "data_size": 65536 00:16:50.251 }, 00:16:50.251 { 00:16:50.251 "name": "BaseBdev4", 00:16:50.251 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:50.251 "is_configured": true, 00:16:50.251 "data_offset": 0, 00:16:50.251 "data_size": 65536 00:16:50.251 } 00:16:50.251 ] 00:16:50.251 }' 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.251 11:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.820 [2024-11-04 11:49:16.115292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.820 "name": "Existed_Raid", 00:16:50.820 "aliases": [ 00:16:50.820 "8b382b41-e431-43e4-b53a-29f3792c3e19" 00:16:50.820 ], 00:16:50.820 "product_name": "Raid Volume", 00:16:50.820 "block_size": 512, 00:16:50.820 "num_blocks": 196608, 00:16:50.820 "uuid": "8b382b41-e431-43e4-b53a-29f3792c3e19", 00:16:50.820 "assigned_rate_limits": { 00:16:50.820 "rw_ios_per_sec": 0, 00:16:50.820 "rw_mbytes_per_sec": 0, 00:16:50.820 "r_mbytes_per_sec": 0, 00:16:50.820 "w_mbytes_per_sec": 0 00:16:50.820 }, 00:16:50.820 "claimed": false, 00:16:50.820 "zoned": false, 00:16:50.820 "supported_io_types": { 00:16:50.820 "read": true, 00:16:50.820 "write": true, 00:16:50.820 "unmap": false, 00:16:50.820 "flush": false, 00:16:50.820 "reset": true, 00:16:50.820 "nvme_admin": false, 00:16:50.820 "nvme_io": false, 00:16:50.820 "nvme_io_md": 
false, 00:16:50.820 "write_zeroes": true, 00:16:50.820 "zcopy": false, 00:16:50.820 "get_zone_info": false, 00:16:50.820 "zone_management": false, 00:16:50.820 "zone_append": false, 00:16:50.820 "compare": false, 00:16:50.820 "compare_and_write": false, 00:16:50.820 "abort": false, 00:16:50.820 "seek_hole": false, 00:16:50.820 "seek_data": false, 00:16:50.820 "copy": false, 00:16:50.820 "nvme_iov_md": false 00:16:50.820 }, 00:16:50.820 "driver_specific": { 00:16:50.820 "raid": { 00:16:50.820 "uuid": "8b382b41-e431-43e4-b53a-29f3792c3e19", 00:16:50.820 "strip_size_kb": 64, 00:16:50.820 "state": "online", 00:16:50.820 "raid_level": "raid5f", 00:16:50.820 "superblock": false, 00:16:50.820 "num_base_bdevs": 4, 00:16:50.820 "num_base_bdevs_discovered": 4, 00:16:50.820 "num_base_bdevs_operational": 4, 00:16:50.820 "base_bdevs_list": [ 00:16:50.820 { 00:16:50.820 "name": "NewBaseBdev", 00:16:50.820 "uuid": "4a433500-d769-40ea-8011-f98ee4c8504f", 00:16:50.820 "is_configured": true, 00:16:50.820 "data_offset": 0, 00:16:50.820 "data_size": 65536 00:16:50.820 }, 00:16:50.820 { 00:16:50.820 "name": "BaseBdev2", 00:16:50.820 "uuid": "6394c04a-fd9a-466d-afac-6bb1fa686712", 00:16:50.820 "is_configured": true, 00:16:50.820 "data_offset": 0, 00:16:50.820 "data_size": 65536 00:16:50.820 }, 00:16:50.820 { 00:16:50.820 "name": "BaseBdev3", 00:16:50.820 "uuid": "ed273edb-36e9-4080-a1c8-ef49b8488c39", 00:16:50.820 "is_configured": true, 00:16:50.820 "data_offset": 0, 00:16:50.820 "data_size": 65536 00:16:50.820 }, 00:16:50.820 { 00:16:50.820 "name": "BaseBdev4", 00:16:50.820 "uuid": "f4cd47e8-9ca5-4460-8545-bf0ffd81da8f", 00:16:50.820 "is_configured": true, 00:16:50.820 "data_offset": 0, 00:16:50.820 "data_size": 65536 00:16:50.820 } 00:16:50.820 ] 00:16:50.820 } 00:16:50.820 } 00:16:50.820 }' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.820 11:49:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:50.820 BaseBdev2 00:16:50.820 BaseBdev3 00:16:50.820 BaseBdev4' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.820 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.080 11:49:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.080 [2024-11-04 11:49:16.430515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.080 [2024-11-04 11:49:16.430591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.080 [2024-11-04 11:49:16.430700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.080 [2024-11-04 11:49:16.431071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.080 [2024-11-04 11:49:16.431133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83013 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83013 ']' 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83013 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83013 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83013' 00:16:51.080 killing process with pid 83013 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83013 00:16:51.080 [2024-11-04 11:49:16.480199] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.080 11:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83013 00:16:51.649 [2024-11-04 11:49:16.887844] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.589 ************************************ 00:16:52.589 END TEST raid5f_state_function_test 00:16:52.589 ************************************ 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:52.589 00:16:52.589 real 0m11.771s 00:16:52.589 user 0m18.652s 00:16:52.589 sys 0m2.147s 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.589 11:49:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:52.589 11:49:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:52.589 11:49:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:52.589 11:49:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.589 ************************************ 00:16:52.589 START TEST 
raid5f_state_function_test_sb 00:16:52.589 ************************************ 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:52.589 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:52.848 
11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83679 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83679' 00:16:52.848 Process raid pid: 83679 00:16:52.848 11:49:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83679 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83679 ']' 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:52.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:52.848 11:49:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.848 [2024-11-04 11:49:18.208928] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:16:52.849 [2024-11-04 11:49:18.209120] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.108 [2024-11-04 11:49:18.386323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.108 [2024-11-04 11:49:18.504357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.368 [2024-11-04 11:49:18.716923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.368 [2024-11-04 11:49:18.717022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.629 [2024-11-04 11:49:19.082520] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.629 [2024-11-04 11:49:19.082613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.629 [2024-11-04 11:49:19.082643] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.629 [2024-11-04 11:49:19.082666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.629 [2024-11-04 11:49:19.082689] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:53.629 [2024-11-04 11:49:19.082710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.629 [2024-11-04 11:49:19.082744] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:53.629 [2024-11-04 11:49:19.082788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.629 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.629 "name": "Existed_Raid", 00:16:53.629 "uuid": "a2f8e58a-027f-46eb-910a-d374aafc0381", 00:16:53.629 "strip_size_kb": 64, 00:16:53.629 "state": "configuring", 00:16:53.629 "raid_level": "raid5f", 00:16:53.629 "superblock": true, 00:16:53.629 "num_base_bdevs": 4, 00:16:53.629 "num_base_bdevs_discovered": 0, 00:16:53.629 "num_base_bdevs_operational": 4, 00:16:53.629 "base_bdevs_list": [ 00:16:53.629 { 00:16:53.629 "name": "BaseBdev1", 00:16:53.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.629 "is_configured": false, 00:16:53.629 "data_offset": 0, 00:16:53.629 "data_size": 0 00:16:53.629 }, 00:16:53.629 { 00:16:53.629 "name": "BaseBdev2", 00:16:53.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.629 "is_configured": false, 00:16:53.629 "data_offset": 0, 00:16:53.629 "data_size": 0 00:16:53.630 }, 00:16:53.630 { 00:16:53.630 "name": "BaseBdev3", 00:16:53.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.630 "is_configured": false, 00:16:53.630 "data_offset": 0, 00:16:53.630 "data_size": 0 00:16:53.630 }, 00:16:53.630 { 00:16:53.630 "name": "BaseBdev4", 00:16:53.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.630 "is_configured": false, 00:16:53.630 "data_offset": 0, 00:16:53.630 "data_size": 0 00:16:53.630 } 00:16:53.630 ] 00:16:53.630 }' 00:16:53.630 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.630 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.219 [2024-11-04 11:49:19.525691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.219 [2024-11-04 11:49:19.525776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.219 [2024-11-04 11:49:19.537668] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.219 [2024-11-04 11:49:19.537744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.219 [2024-11-04 11:49:19.537771] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.219 [2024-11-04 11:49:19.537792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.219 [2024-11-04 11:49:19.537810] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.219 [2024-11-04 11:49:19.537830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.219 [2024-11-04 11:49:19.537846] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.219 [2024-11-04 11:49:19.537866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.219 [2024-11-04 11:49:19.585683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.219 BaseBdev1 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.219 [ 00:16:54.219 { 00:16:54.219 "name": "BaseBdev1", 00:16:54.219 "aliases": [ 00:16:54.219 "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf" 00:16:54.219 ], 00:16:54.219 "product_name": "Malloc disk", 00:16:54.219 "block_size": 512, 00:16:54.219 "num_blocks": 65536, 00:16:54.219 "uuid": "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf", 00:16:54.219 "assigned_rate_limits": { 00:16:54.219 "rw_ios_per_sec": 0, 00:16:54.219 "rw_mbytes_per_sec": 0, 00:16:54.219 "r_mbytes_per_sec": 0, 00:16:54.219 "w_mbytes_per_sec": 0 00:16:54.219 }, 00:16:54.219 "claimed": true, 00:16:54.219 "claim_type": "exclusive_write", 00:16:54.219 "zoned": false, 00:16:54.219 "supported_io_types": { 00:16:54.219 "read": true, 00:16:54.219 "write": true, 00:16:54.219 "unmap": true, 00:16:54.219 "flush": true, 00:16:54.219 "reset": true, 00:16:54.219 "nvme_admin": false, 00:16:54.219 "nvme_io": false, 00:16:54.219 "nvme_io_md": false, 00:16:54.219 "write_zeroes": true, 00:16:54.219 "zcopy": true, 00:16:54.219 "get_zone_info": false, 00:16:54.219 "zone_management": false, 00:16:54.219 "zone_append": false, 00:16:54.219 "compare": false, 00:16:54.219 "compare_and_write": false, 00:16:54.219 "abort": true, 00:16:54.219 "seek_hole": false, 00:16:54.219 "seek_data": false, 00:16:54.219 "copy": true, 00:16:54.219 "nvme_iov_md": false 00:16:54.219 }, 00:16:54.219 "memory_domains": [ 00:16:54.219 { 00:16:54.219 "dma_device_id": "system", 00:16:54.219 "dma_device_type": 1 00:16:54.219 }, 00:16:54.219 { 00:16:54.219 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:54.219 "dma_device_type": 2 00:16:54.219 } 00:16:54.219 ], 00:16:54.219 "driver_specific": {} 00:16:54.219 } 00:16:54.219 ] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.219 11:49:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.219 "name": "Existed_Raid", 00:16:54.219 "uuid": "10afb5f0-c0ad-4603-9dbf-207f6b81761e", 00:16:54.219 "strip_size_kb": 64, 00:16:54.219 "state": "configuring", 00:16:54.219 "raid_level": "raid5f", 00:16:54.219 "superblock": true, 00:16:54.219 "num_base_bdevs": 4, 00:16:54.219 "num_base_bdevs_discovered": 1, 00:16:54.219 "num_base_bdevs_operational": 4, 00:16:54.219 "base_bdevs_list": [ 00:16:54.219 { 00:16:54.219 "name": "BaseBdev1", 00:16:54.219 "uuid": "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf", 00:16:54.219 "is_configured": true, 00:16:54.219 "data_offset": 2048, 00:16:54.219 "data_size": 63488 00:16:54.219 }, 00:16:54.219 { 00:16:54.219 "name": "BaseBdev2", 00:16:54.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.219 "is_configured": false, 00:16:54.219 "data_offset": 0, 00:16:54.219 "data_size": 0 00:16:54.219 }, 00:16:54.219 { 00:16:54.219 "name": "BaseBdev3", 00:16:54.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.219 "is_configured": false, 00:16:54.219 "data_offset": 0, 00:16:54.219 "data_size": 0 00:16:54.219 }, 00:16:54.219 { 00:16:54.219 "name": "BaseBdev4", 00:16:54.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.219 "is_configured": false, 00:16:54.219 "data_offset": 0, 00:16:54.219 "data_size": 0 00:16:54.219 } 00:16:54.219 ] 00:16:54.219 }' 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.219 11:49:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.797 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.797 11:49:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.797 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.797 [2024-11-04 11:49:20.052929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.798 [2024-11-04 11:49:20.053051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.798 [2024-11-04 11:49:20.064971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.798 [2024-11-04 11:49:20.066910] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.798 [2024-11-04 11:49:20.066985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.798 [2024-11-04 11:49:20.067013] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.798 [2024-11-04 11:49:20.067037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.798 [2024-11-04 11:49:20.067055] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:54.798 [2024-11-04 11:49:20.067075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.798 11:49:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.798 "name": "Existed_Raid", 00:16:54.798 "uuid": "04229bbb-9001-4f71-a68c-1bb7af3f24ac", 00:16:54.798 "strip_size_kb": 64, 00:16:54.798 "state": "configuring", 00:16:54.798 "raid_level": "raid5f", 00:16:54.798 "superblock": true, 00:16:54.798 "num_base_bdevs": 4, 00:16:54.798 "num_base_bdevs_discovered": 1, 00:16:54.798 "num_base_bdevs_operational": 4, 00:16:54.798 "base_bdevs_list": [ 00:16:54.798 { 00:16:54.798 "name": "BaseBdev1", 00:16:54.798 "uuid": "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf", 00:16:54.798 "is_configured": true, 00:16:54.798 "data_offset": 2048, 00:16:54.798 "data_size": 63488 00:16:54.798 }, 00:16:54.798 { 00:16:54.798 "name": "BaseBdev2", 00:16:54.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.798 "is_configured": false, 00:16:54.798 "data_offset": 0, 00:16:54.798 "data_size": 0 00:16:54.798 }, 00:16:54.798 { 00:16:54.798 "name": "BaseBdev3", 00:16:54.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.798 "is_configured": false, 00:16:54.798 "data_offset": 0, 00:16:54.798 "data_size": 0 00:16:54.798 }, 00:16:54.798 { 00:16:54.798 "name": "BaseBdev4", 00:16:54.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.798 "is_configured": false, 00:16:54.798 "data_offset": 0, 00:16:54.798 "data_size": 0 00:16:54.798 } 00:16:54.798 ] 00:16:54.798 }' 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.798 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.058 [2024-11-04 11:49:20.535827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.058 BaseBdev2 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.058 [ 00:16:55.058 { 00:16:55.058 "name": "BaseBdev2", 00:16:55.058 "aliases": [ 00:16:55.058 
"216b3208-275e-4bc5-85d8-d18c3771dda7" 00:16:55.058 ], 00:16:55.058 "product_name": "Malloc disk", 00:16:55.058 "block_size": 512, 00:16:55.058 "num_blocks": 65536, 00:16:55.058 "uuid": "216b3208-275e-4bc5-85d8-d18c3771dda7", 00:16:55.058 "assigned_rate_limits": { 00:16:55.058 "rw_ios_per_sec": 0, 00:16:55.058 "rw_mbytes_per_sec": 0, 00:16:55.058 "r_mbytes_per_sec": 0, 00:16:55.058 "w_mbytes_per_sec": 0 00:16:55.058 }, 00:16:55.058 "claimed": true, 00:16:55.058 "claim_type": "exclusive_write", 00:16:55.058 "zoned": false, 00:16:55.058 "supported_io_types": { 00:16:55.058 "read": true, 00:16:55.058 "write": true, 00:16:55.058 "unmap": true, 00:16:55.058 "flush": true, 00:16:55.058 "reset": true, 00:16:55.058 "nvme_admin": false, 00:16:55.058 "nvme_io": false, 00:16:55.058 "nvme_io_md": false, 00:16:55.058 "write_zeroes": true, 00:16:55.058 "zcopy": true, 00:16:55.058 "get_zone_info": false, 00:16:55.058 "zone_management": false, 00:16:55.058 "zone_append": false, 00:16:55.058 "compare": false, 00:16:55.058 "compare_and_write": false, 00:16:55.058 "abort": true, 00:16:55.058 "seek_hole": false, 00:16:55.058 "seek_data": false, 00:16:55.058 "copy": true, 00:16:55.058 "nvme_iov_md": false 00:16:55.058 }, 00:16:55.058 "memory_domains": [ 00:16:55.058 { 00:16:55.058 "dma_device_id": "system", 00:16:55.058 "dma_device_type": 1 00:16:55.058 }, 00:16:55.058 { 00:16:55.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.058 "dma_device_type": 2 00:16:55.058 } 00:16:55.058 ], 00:16:55.058 "driver_specific": {} 00:16:55.058 } 00:16:55.058 ] 00:16:55.058 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.318 "name": "Existed_Raid", 00:16:55.318 "uuid": 
"04229bbb-9001-4f71-a68c-1bb7af3f24ac", 00:16:55.318 "strip_size_kb": 64, 00:16:55.318 "state": "configuring", 00:16:55.318 "raid_level": "raid5f", 00:16:55.318 "superblock": true, 00:16:55.318 "num_base_bdevs": 4, 00:16:55.318 "num_base_bdevs_discovered": 2, 00:16:55.318 "num_base_bdevs_operational": 4, 00:16:55.318 "base_bdevs_list": [ 00:16:55.318 { 00:16:55.318 "name": "BaseBdev1", 00:16:55.318 "uuid": "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf", 00:16:55.318 "is_configured": true, 00:16:55.318 "data_offset": 2048, 00:16:55.318 "data_size": 63488 00:16:55.318 }, 00:16:55.318 { 00:16:55.318 "name": "BaseBdev2", 00:16:55.318 "uuid": "216b3208-275e-4bc5-85d8-d18c3771dda7", 00:16:55.318 "is_configured": true, 00:16:55.318 "data_offset": 2048, 00:16:55.318 "data_size": 63488 00:16:55.318 }, 00:16:55.318 { 00:16:55.318 "name": "BaseBdev3", 00:16:55.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.318 "is_configured": false, 00:16:55.318 "data_offset": 0, 00:16:55.318 "data_size": 0 00:16:55.318 }, 00:16:55.318 { 00:16:55.318 "name": "BaseBdev4", 00:16:55.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.318 "is_configured": false, 00:16:55.318 "data_offset": 0, 00:16:55.318 "data_size": 0 00:16:55.318 } 00:16:55.318 ] 00:16:55.318 }' 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.318 11:49:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.577 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:55.577 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.577 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.577 [2024-11-04 11:49:21.097669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.577 BaseBdev3 
00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.836 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.837 [ 00:16:55.837 { 00:16:55.837 "name": "BaseBdev3", 00:16:55.837 "aliases": [ 00:16:55.837 "d6cad02b-c701-4911-a064-d2d6095d6ad6" 00:16:55.837 ], 00:16:55.837 "product_name": "Malloc disk", 00:16:55.837 "block_size": 512, 00:16:55.837 "num_blocks": 65536, 00:16:55.837 "uuid": "d6cad02b-c701-4911-a064-d2d6095d6ad6", 00:16:55.837 
"assigned_rate_limits": { 00:16:55.837 "rw_ios_per_sec": 0, 00:16:55.837 "rw_mbytes_per_sec": 0, 00:16:55.837 "r_mbytes_per_sec": 0, 00:16:55.837 "w_mbytes_per_sec": 0 00:16:55.837 }, 00:16:55.837 "claimed": true, 00:16:55.837 "claim_type": "exclusive_write", 00:16:55.837 "zoned": false, 00:16:55.837 "supported_io_types": { 00:16:55.837 "read": true, 00:16:55.837 "write": true, 00:16:55.837 "unmap": true, 00:16:55.837 "flush": true, 00:16:55.837 "reset": true, 00:16:55.837 "nvme_admin": false, 00:16:55.837 "nvme_io": false, 00:16:55.837 "nvme_io_md": false, 00:16:55.837 "write_zeroes": true, 00:16:55.837 "zcopy": true, 00:16:55.837 "get_zone_info": false, 00:16:55.837 "zone_management": false, 00:16:55.837 "zone_append": false, 00:16:55.837 "compare": false, 00:16:55.837 "compare_and_write": false, 00:16:55.837 "abort": true, 00:16:55.837 "seek_hole": false, 00:16:55.837 "seek_data": false, 00:16:55.837 "copy": true, 00:16:55.837 "nvme_iov_md": false 00:16:55.837 }, 00:16:55.837 "memory_domains": [ 00:16:55.837 { 00:16:55.837 "dma_device_id": "system", 00:16:55.837 "dma_device_type": 1 00:16:55.837 }, 00:16:55.837 { 00:16:55.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.837 "dma_device_type": 2 00:16:55.837 } 00:16:55.837 ], 00:16:55.837 "driver_specific": {} 00:16:55.837 } 00:16:55.837 ] 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.837 "name": "Existed_Raid", 00:16:55.837 "uuid": "04229bbb-9001-4f71-a68c-1bb7af3f24ac", 00:16:55.837 "strip_size_kb": 64, 00:16:55.837 "state": "configuring", 00:16:55.837 "raid_level": "raid5f", 00:16:55.837 "superblock": true, 00:16:55.837 "num_base_bdevs": 4, 00:16:55.837 "num_base_bdevs_discovered": 3, 
00:16:55.837 "num_base_bdevs_operational": 4, 00:16:55.837 "base_bdevs_list": [ 00:16:55.837 { 00:16:55.837 "name": "BaseBdev1", 00:16:55.837 "uuid": "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf", 00:16:55.837 "is_configured": true, 00:16:55.837 "data_offset": 2048, 00:16:55.837 "data_size": 63488 00:16:55.837 }, 00:16:55.837 { 00:16:55.837 "name": "BaseBdev2", 00:16:55.837 "uuid": "216b3208-275e-4bc5-85d8-d18c3771dda7", 00:16:55.837 "is_configured": true, 00:16:55.837 "data_offset": 2048, 00:16:55.837 "data_size": 63488 00:16:55.837 }, 00:16:55.837 { 00:16:55.837 "name": "BaseBdev3", 00:16:55.837 "uuid": "d6cad02b-c701-4911-a064-d2d6095d6ad6", 00:16:55.837 "is_configured": true, 00:16:55.837 "data_offset": 2048, 00:16:55.837 "data_size": 63488 00:16:55.837 }, 00:16:55.837 { 00:16:55.837 "name": "BaseBdev4", 00:16:55.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.837 "is_configured": false, 00:16:55.837 "data_offset": 0, 00:16:55.837 "data_size": 0 00:16:55.837 } 00:16:55.837 ] 00:16:55.837 }' 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.837 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.097 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:56.097 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.097 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.356 [2024-11-04 11:49:21.623050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:56.356 [2024-11-04 11:49:21.623493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:56.356 [2024-11-04 11:49:21.623547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:56.356 [2024-11-04 
11:49:21.623846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:56.356 BaseBdev4 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.356 [2024-11-04 11:49:21.631598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:56.356 [2024-11-04 11:49:21.631660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:56.356 [2024-11-04 11:49:21.631990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:56.356 11:49:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.356 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.356 [ 00:16:56.356 { 00:16:56.356 "name": "BaseBdev4", 00:16:56.356 "aliases": [ 00:16:56.356 "ddf973e9-d494-4456-9560-7f1e6d1b136b" 00:16:56.356 ], 00:16:56.356 "product_name": "Malloc disk", 00:16:56.356 "block_size": 512, 00:16:56.356 "num_blocks": 65536, 00:16:56.356 "uuid": "ddf973e9-d494-4456-9560-7f1e6d1b136b", 00:16:56.356 "assigned_rate_limits": { 00:16:56.356 "rw_ios_per_sec": 0, 00:16:56.356 "rw_mbytes_per_sec": 0, 00:16:56.356 "r_mbytes_per_sec": 0, 00:16:56.356 "w_mbytes_per_sec": 0 00:16:56.356 }, 00:16:56.356 "claimed": true, 00:16:56.356 "claim_type": "exclusive_write", 00:16:56.356 "zoned": false, 00:16:56.356 "supported_io_types": { 00:16:56.356 "read": true, 00:16:56.356 "write": true, 00:16:56.356 "unmap": true, 00:16:56.356 "flush": true, 00:16:56.356 "reset": true, 00:16:56.356 "nvme_admin": false, 00:16:56.356 "nvme_io": false, 00:16:56.356 "nvme_io_md": false, 00:16:56.356 "write_zeroes": true, 00:16:56.356 "zcopy": true, 00:16:56.356 "get_zone_info": false, 00:16:56.356 "zone_management": false, 00:16:56.356 "zone_append": false, 00:16:56.356 "compare": false, 00:16:56.356 "compare_and_write": false, 00:16:56.356 "abort": true, 00:16:56.356 "seek_hole": false, 00:16:56.356 "seek_data": false, 00:16:56.356 "copy": true, 00:16:56.356 "nvme_iov_md": false 00:16:56.356 }, 00:16:56.356 "memory_domains": [ 00:16:56.356 { 00:16:56.356 "dma_device_id": "system", 00:16:56.356 "dma_device_type": 1 00:16:56.356 }, 00:16:56.356 { 00:16:56.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.356 "dma_device_type": 2 00:16:56.356 } 00:16:56.356 ], 00:16:56.356 "driver_specific": {} 00:16:56.356 } 00:16:56.357 ] 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.357 11:49:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
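The trace above is executing `verify_raid_bdev_state Existed_Raid online raid5f 64 4`: it fetches `rpc_cmd bdev_raid_get_bdevs all`, filters the array with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the fields against the expected values. The following is a minimal Python sketch of that check, not SPDK code — the JSON shape is copied from the log output (trimmed to two base bdevs for brevity) and the helper name simply mirrors the shell function being traced:

```python
import json

# JSON shape copied from the `bdev_raid_get_bdevs` output in the log above,
# trimmed to two base bdevs for brevity.
RPC_OUTPUT = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
    ],
}])

def verify_raid_bdev_state(rpc_json, name, state, level, strip_kb, operational):
    # jq equivalent: .[] | select(.name == "Existed_Raid")
    info = next(b for b in json.loads(rpc_json) if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(RPC_OUTPUT, "Existed_Raid", "online", "raid5f", 64, 4)
```

The shell version stores the filtered object in `raid_bdev_info` and re-runs `jq` per field; the sketch collapses that into direct dictionary lookups.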
00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.357 "name": "Existed_Raid", 00:16:56.357 "uuid": "04229bbb-9001-4f71-a68c-1bb7af3f24ac", 00:16:56.357 "strip_size_kb": 64, 00:16:56.357 "state": "online", 00:16:56.357 "raid_level": "raid5f", 00:16:56.357 "superblock": true, 00:16:56.357 "num_base_bdevs": 4, 00:16:56.357 "num_base_bdevs_discovered": 4, 00:16:56.357 "num_base_bdevs_operational": 4, 00:16:56.357 "base_bdevs_list": [ 00:16:56.357 { 00:16:56.357 "name": "BaseBdev1", 00:16:56.357 "uuid": "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf", 00:16:56.357 "is_configured": true, 00:16:56.357 "data_offset": 2048, 00:16:56.357 "data_size": 63488 00:16:56.357 }, 00:16:56.357 { 00:16:56.357 "name": "BaseBdev2", 00:16:56.357 "uuid": "216b3208-275e-4bc5-85d8-d18c3771dda7", 00:16:56.357 "is_configured": true, 00:16:56.357 "data_offset": 2048, 00:16:56.357 "data_size": 63488 00:16:56.357 }, 00:16:56.357 { 00:16:56.357 "name": "BaseBdev3", 00:16:56.357 "uuid": "d6cad02b-c701-4911-a064-d2d6095d6ad6", 00:16:56.357 "is_configured": true, 00:16:56.357 "data_offset": 2048, 00:16:56.357 "data_size": 63488 00:16:56.357 }, 00:16:56.357 { 00:16:56.357 "name": "BaseBdev4", 00:16:56.357 "uuid": "ddf973e9-d494-4456-9560-7f1e6d1b136b", 00:16:56.357 "is_configured": true, 00:16:56.357 "data_offset": 2048, 00:16:56.357 "data_size": 63488 00:16:56.357 } 00:16:56.357 ] 00:16:56.357 }' 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.357 11:49:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.616 [2024-11-04 11:49:22.095489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.616 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.616 "name": "Existed_Raid", 00:16:56.616 "aliases": [ 00:16:56.616 "04229bbb-9001-4f71-a68c-1bb7af3f24ac" 00:16:56.616 ], 00:16:56.616 "product_name": "Raid Volume", 00:16:56.616 "block_size": 512, 00:16:56.616 "num_blocks": 190464, 00:16:56.616 "uuid": "04229bbb-9001-4f71-a68c-1bb7af3f24ac", 00:16:56.616 "assigned_rate_limits": { 00:16:56.616 "rw_ios_per_sec": 0, 00:16:56.616 "rw_mbytes_per_sec": 0, 00:16:56.616 "r_mbytes_per_sec": 0, 00:16:56.616 "w_mbytes_per_sec": 0 00:16:56.616 }, 00:16:56.616 "claimed": false, 00:16:56.616 "zoned": false, 00:16:56.616 "supported_io_types": { 00:16:56.616 "read": true, 00:16:56.616 "write": true, 00:16:56.616 "unmap": false, 00:16:56.616 "flush": false, 
00:16:56.616 "reset": true, 00:16:56.616 "nvme_admin": false, 00:16:56.616 "nvme_io": false, 00:16:56.616 "nvme_io_md": false, 00:16:56.616 "write_zeroes": true, 00:16:56.616 "zcopy": false, 00:16:56.616 "get_zone_info": false, 00:16:56.616 "zone_management": false, 00:16:56.616 "zone_append": false, 00:16:56.616 "compare": false, 00:16:56.616 "compare_and_write": false, 00:16:56.616 "abort": false, 00:16:56.616 "seek_hole": false, 00:16:56.616 "seek_data": false, 00:16:56.616 "copy": false, 00:16:56.616 "nvme_iov_md": false 00:16:56.616 }, 00:16:56.616 "driver_specific": { 00:16:56.616 "raid": { 00:16:56.616 "uuid": "04229bbb-9001-4f71-a68c-1bb7af3f24ac", 00:16:56.616 "strip_size_kb": 64, 00:16:56.616 "state": "online", 00:16:56.616 "raid_level": "raid5f", 00:16:56.616 "superblock": true, 00:16:56.616 "num_base_bdevs": 4, 00:16:56.616 "num_base_bdevs_discovered": 4, 00:16:56.616 "num_base_bdevs_operational": 4, 00:16:56.616 "base_bdevs_list": [ 00:16:56.616 { 00:16:56.616 "name": "BaseBdev1", 00:16:56.616 "uuid": "8baa3c32-0ef3-4b8a-a8bf-8879b39ce9bf", 00:16:56.616 "is_configured": true, 00:16:56.616 "data_offset": 2048, 00:16:56.616 "data_size": 63488 00:16:56.616 }, 00:16:56.616 { 00:16:56.616 "name": "BaseBdev2", 00:16:56.616 "uuid": "216b3208-275e-4bc5-85d8-d18c3771dda7", 00:16:56.616 "is_configured": true, 00:16:56.617 "data_offset": 2048, 00:16:56.617 "data_size": 63488 00:16:56.617 }, 00:16:56.617 { 00:16:56.617 "name": "BaseBdev3", 00:16:56.617 "uuid": "d6cad02b-c701-4911-a064-d2d6095d6ad6", 00:16:56.617 "is_configured": true, 00:16:56.617 "data_offset": 2048, 00:16:56.617 "data_size": 63488 00:16:56.617 }, 00:16:56.617 { 00:16:56.617 "name": "BaseBdev4", 00:16:56.617 "uuid": "ddf973e9-d494-4456-9560-7f1e6d1b136b", 00:16:56.617 "is_configured": true, 00:16:56.617 "data_offset": 2048, 00:16:56.617 "data_size": 63488 00:16:56.617 } 00:16:56.617 ] 00:16:56.617 } 00:16:56.617 } 00:16:56.617 }' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:56.875 BaseBdev2 00:16:56.875 BaseBdev3 00:16:56.875 BaseBdev4' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.875 11:49:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
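The loop traced at `bdev_raid.sh@191`-`@193` builds a per-bdev fingerprint with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and compares each base bdev's fingerprint against the Raid Volume's. Because the Malloc bdevs carry no metadata fields, `join` renders the nulls as empty strings and the result is `512` followed by three spaces — which is exactly what the `[[ 512 == \5\1\2\ \ \ ]]` glob in the trace is matching. A Python sketch of that fingerprint (the field list is taken from the jq filter in the trace; the helper name is illustrative):

```python
def props_key(bdev):
    # jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # join() renders null/absent entries as empty strings, so a plain
    # 512-byte-block Malloc bdev yields "512" plus three trailing spaces.
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in fields)

raid_key = props_key({"block_size": 512})   # the Raid Volume, per the log
base_key = props_key({"block_size": 512})   # each Malloc BaseBdev
assert raid_key == base_key == "512   "
```

Comparing the joined string rather than the individual fields lets one shell test cover block size, metadata size, interleave mode, and DIF type at once.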
00:16:56.875 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:56.876 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.876 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:56.876 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:56.876 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:56.876 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.876 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.134 [2024-11-04 11:49:22.398763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.134 "name": "Existed_Raid", 00:16:57.134 "uuid": "04229bbb-9001-4f71-a68c-1bb7af3f24ac", 00:16:57.134 "strip_size_kb": 64, 00:16:57.134 "state": "online", 00:16:57.134 "raid_level": "raid5f", 00:16:57.134 "superblock": true, 00:16:57.134 "num_base_bdevs": 4, 00:16:57.134 "num_base_bdevs_discovered": 3, 00:16:57.134 "num_base_bdevs_operational": 3, 00:16:57.134 "base_bdevs_list": [ 00:16:57.134 { 00:16:57.134 "name": null, 00:16:57.134 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:57.134 "is_configured": false, 00:16:57.134 "data_offset": 0, 00:16:57.134 "data_size": 63488 00:16:57.134 }, 00:16:57.134 { 00:16:57.134 "name": "BaseBdev2", 00:16:57.134 "uuid": "216b3208-275e-4bc5-85d8-d18c3771dda7", 00:16:57.134 "is_configured": true, 00:16:57.134 "data_offset": 2048, 00:16:57.134 "data_size": 63488 00:16:57.134 }, 00:16:57.134 { 00:16:57.134 "name": "BaseBdev3", 00:16:57.134 "uuid": "d6cad02b-c701-4911-a064-d2d6095d6ad6", 00:16:57.134 "is_configured": true, 00:16:57.134 "data_offset": 2048, 00:16:57.134 "data_size": 63488 00:16:57.134 }, 00:16:57.134 { 00:16:57.134 "name": "BaseBdev4", 00:16:57.134 "uuid": "ddf973e9-d494-4456-9560-7f1e6d1b136b", 00:16:57.134 "is_configured": true, 00:16:57.134 "data_offset": 2048, 00:16:57.134 "data_size": 63488 00:16:57.134 } 00:16:57.134 ] 00:16:57.134 }' 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.134 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.702 11:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 [2024-11-04 11:49:23.007305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.702 [2024-11-04 11:49:23.007539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.702 [2024-11-04 11:49:23.100929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.702 
11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.702 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.702 [2024-11-04 11:49:23.160878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 [2024-11-04 11:49:23.320814] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:57.961 [2024-11-04 11:49:23.320921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.961 11:49:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.221 BaseBdev2 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 [ 00:16:58.221 { 00:16:58.221 "name": "BaseBdev2", 00:16:58.221 "aliases": [ 00:16:58.221 "193d2823-94e7-4adb-9b4d-feea3670d4cb" 00:16:58.221 ], 00:16:58.221 "product_name": "Malloc disk", 00:16:58.221 "block_size": 512, 00:16:58.221 "num_blocks": 65536, 00:16:58.221 "uuid": 
"193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:16:58.221 "assigned_rate_limits": { 00:16:58.221 "rw_ios_per_sec": 0, 00:16:58.221 "rw_mbytes_per_sec": 0, 00:16:58.221 "r_mbytes_per_sec": 0, 00:16:58.221 "w_mbytes_per_sec": 0 00:16:58.221 }, 00:16:58.221 "claimed": false, 00:16:58.221 "zoned": false, 00:16:58.221 "supported_io_types": { 00:16:58.221 "read": true, 00:16:58.221 "write": true, 00:16:58.221 "unmap": true, 00:16:58.221 "flush": true, 00:16:58.221 "reset": true, 00:16:58.221 "nvme_admin": false, 00:16:58.221 "nvme_io": false, 00:16:58.221 "nvme_io_md": false, 00:16:58.221 "write_zeroes": true, 00:16:58.221 "zcopy": true, 00:16:58.221 "get_zone_info": false, 00:16:58.221 "zone_management": false, 00:16:58.221 "zone_append": false, 00:16:58.221 "compare": false, 00:16:58.221 "compare_and_write": false, 00:16:58.221 "abort": true, 00:16:58.221 "seek_hole": false, 00:16:58.221 "seek_data": false, 00:16:58.221 "copy": true, 00:16:58.221 "nvme_iov_md": false 00:16:58.221 }, 00:16:58.221 "memory_domains": [ 00:16:58.221 { 00:16:58.221 "dma_device_id": "system", 00:16:58.221 "dma_device_type": 1 00:16:58.221 }, 00:16:58.221 { 00:16:58.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.221 "dma_device_type": 2 00:16:58.221 } 00:16:58.221 ], 00:16:58.221 "driver_specific": {} 00:16:58.221 } 00:16:58.221 ] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 BaseBdev3 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.221 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 [ 00:16:58.221 { 00:16:58.221 "name": "BaseBdev3", 00:16:58.221 "aliases": [ 00:16:58.221 "d3ac1fd6-844b-49b1-8843-fe55f2fc9724" 00:16:58.221 ], 00:16:58.221 
"product_name": "Malloc disk", 00:16:58.221 "block_size": 512, 00:16:58.221 "num_blocks": 65536, 00:16:58.221 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:16:58.221 "assigned_rate_limits": { 00:16:58.221 "rw_ios_per_sec": 0, 00:16:58.221 "rw_mbytes_per_sec": 0, 00:16:58.221 "r_mbytes_per_sec": 0, 00:16:58.221 "w_mbytes_per_sec": 0 00:16:58.221 }, 00:16:58.221 "claimed": false, 00:16:58.221 "zoned": false, 00:16:58.221 "supported_io_types": { 00:16:58.221 "read": true, 00:16:58.221 "write": true, 00:16:58.221 "unmap": true, 00:16:58.221 "flush": true, 00:16:58.221 "reset": true, 00:16:58.221 "nvme_admin": false, 00:16:58.221 "nvme_io": false, 00:16:58.221 "nvme_io_md": false, 00:16:58.221 "write_zeroes": true, 00:16:58.221 "zcopy": true, 00:16:58.221 "get_zone_info": false, 00:16:58.221 "zone_management": false, 00:16:58.221 "zone_append": false, 00:16:58.221 "compare": false, 00:16:58.221 "compare_and_write": false, 00:16:58.221 "abort": true, 00:16:58.221 "seek_hole": false, 00:16:58.222 "seek_data": false, 00:16:58.222 "copy": true, 00:16:58.222 "nvme_iov_md": false 00:16:58.222 }, 00:16:58.222 "memory_domains": [ 00:16:58.222 { 00:16:58.222 "dma_device_id": "system", 00:16:58.222 "dma_device_type": 1 00:16:58.222 }, 00:16:58.222 { 00:16:58.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.222 "dma_device_type": 2 00:16:58.222 } 00:16:58.222 ], 00:16:58.222 "driver_specific": {} 00:16:58.222 } 00:16:58.222 ] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.222 BaseBdev4 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.222 [ 00:16:58.222 { 00:16:58.222 "name": "BaseBdev4", 00:16:58.222 
"aliases": [ 00:16:58.222 "6a04bdd9-bc75-48df-b07f-55b88cb657c3" 00:16:58.222 ], 00:16:58.222 "product_name": "Malloc disk", 00:16:58.222 "block_size": 512, 00:16:58.222 "num_blocks": 65536, 00:16:58.222 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:16:58.222 "assigned_rate_limits": { 00:16:58.222 "rw_ios_per_sec": 0, 00:16:58.222 "rw_mbytes_per_sec": 0, 00:16:58.222 "r_mbytes_per_sec": 0, 00:16:58.222 "w_mbytes_per_sec": 0 00:16:58.222 }, 00:16:58.222 "claimed": false, 00:16:58.222 "zoned": false, 00:16:58.222 "supported_io_types": { 00:16:58.222 "read": true, 00:16:58.222 "write": true, 00:16:58.222 "unmap": true, 00:16:58.222 "flush": true, 00:16:58.222 "reset": true, 00:16:58.222 "nvme_admin": false, 00:16:58.222 "nvme_io": false, 00:16:58.222 "nvme_io_md": false, 00:16:58.222 "write_zeroes": true, 00:16:58.222 "zcopy": true, 00:16:58.222 "get_zone_info": false, 00:16:58.222 "zone_management": false, 00:16:58.222 "zone_append": false, 00:16:58.222 "compare": false, 00:16:58.222 "compare_and_write": false, 00:16:58.222 "abort": true, 00:16:58.222 "seek_hole": false, 00:16:58.222 "seek_data": false, 00:16:58.222 "copy": true, 00:16:58.222 "nvme_iov_md": false 00:16:58.222 }, 00:16:58.222 "memory_domains": [ 00:16:58.222 { 00:16:58.222 "dma_device_id": "system", 00:16:58.222 "dma_device_type": 1 00:16:58.222 }, 00:16:58.222 { 00:16:58.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.222 "dma_device_type": 2 00:16:58.222 } 00:16:58.222 ], 00:16:58.222 "driver_specific": {} 00:16:58.222 } 00:16:58.222 ] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:58.222 
11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.222 [2024-11-04 11:49:23.731174] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:58.222 [2024-11-04 11:49:23.731262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:58.222 [2024-11-04 11:49:23.731303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.222 [2024-11-04 11:49:23.733111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.222 [2024-11-04 11:49:23.733207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.222 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.503 "name": "Existed_Raid", 00:16:58.503 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:16:58.503 "strip_size_kb": 64, 00:16:58.503 "state": "configuring", 00:16:58.503 "raid_level": "raid5f", 00:16:58.503 "superblock": true, 00:16:58.503 "num_base_bdevs": 4, 00:16:58.503 "num_base_bdevs_discovered": 3, 00:16:58.503 "num_base_bdevs_operational": 4, 00:16:58.503 "base_bdevs_list": [ 00:16:58.503 { 00:16:58.503 "name": "BaseBdev1", 00:16:58.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.503 "is_configured": false, 00:16:58.503 "data_offset": 0, 00:16:58.503 "data_size": 0 00:16:58.503 }, 00:16:58.503 { 00:16:58.503 "name": "BaseBdev2", 00:16:58.503 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:16:58.503 "is_configured": true, 00:16:58.503 "data_offset": 2048, 00:16:58.503 "data_size": 63488 00:16:58.503 }, 00:16:58.503 { 00:16:58.503 "name": "BaseBdev3", 
00:16:58.503 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:16:58.503 "is_configured": true, 00:16:58.503 "data_offset": 2048, 00:16:58.503 "data_size": 63488 00:16:58.503 }, 00:16:58.503 { 00:16:58.503 "name": "BaseBdev4", 00:16:58.503 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:16:58.503 "is_configured": true, 00:16:58.503 "data_offset": 2048, 00:16:58.503 "data_size": 63488 00:16:58.503 } 00:16:58.503 ] 00:16:58.503 }' 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.503 11:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.761 [2024-11-04 11:49:24.190439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.761 
11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.761 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.762 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.762 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.762 "name": "Existed_Raid", 00:16:58.762 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:16:58.762 "strip_size_kb": 64, 00:16:58.762 "state": "configuring", 00:16:58.762 "raid_level": "raid5f", 00:16:58.762 "superblock": true, 00:16:58.762 "num_base_bdevs": 4, 00:16:58.762 "num_base_bdevs_discovered": 2, 00:16:58.762 "num_base_bdevs_operational": 4, 00:16:58.762 "base_bdevs_list": [ 00:16:58.762 { 00:16:58.762 "name": "BaseBdev1", 00:16:58.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.762 "is_configured": false, 00:16:58.762 "data_offset": 0, 00:16:58.762 "data_size": 0 00:16:58.762 }, 00:16:58.762 { 00:16:58.762 "name": null, 00:16:58.762 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:16:58.762 "is_configured": false, 00:16:58.762 "data_offset": 0, 00:16:58.762 "data_size": 63488 00:16:58.762 }, 00:16:58.762 { 
00:16:58.762 "name": "BaseBdev3", 00:16:58.762 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:16:58.762 "is_configured": true, 00:16:58.762 "data_offset": 2048, 00:16:58.762 "data_size": 63488 00:16:58.762 }, 00:16:58.762 { 00:16:58.762 "name": "BaseBdev4", 00:16:58.762 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:16:58.762 "is_configured": true, 00:16:58.762 "data_offset": 2048, 00:16:58.762 "data_size": 63488 00:16:58.762 } 00:16:58.762 ] 00:16:58.762 }' 00:16:58.762 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.762 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.328 [2024-11-04 11:49:24.739988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.328 BaseBdev1 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.328 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.328 [ 00:16:59.328 { 00:16:59.328 "name": "BaseBdev1", 00:16:59.328 "aliases": [ 00:16:59.328 "b9d4ad7c-452f-4541-bd4f-d9c418e124aa" 00:16:59.328 ], 00:16:59.328 "product_name": "Malloc disk", 00:16:59.328 "block_size": 512, 00:16:59.328 "num_blocks": 65536, 00:16:59.328 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:16:59.328 "assigned_rate_limits": { 00:16:59.328 "rw_ios_per_sec": 0, 00:16:59.328 "rw_mbytes_per_sec": 0, 00:16:59.328 
"r_mbytes_per_sec": 0, 00:16:59.328 "w_mbytes_per_sec": 0 00:16:59.328 }, 00:16:59.328 "claimed": true, 00:16:59.328 "claim_type": "exclusive_write", 00:16:59.328 "zoned": false, 00:16:59.328 "supported_io_types": { 00:16:59.328 "read": true, 00:16:59.328 "write": true, 00:16:59.328 "unmap": true, 00:16:59.328 "flush": true, 00:16:59.328 "reset": true, 00:16:59.328 "nvme_admin": false, 00:16:59.328 "nvme_io": false, 00:16:59.328 "nvme_io_md": false, 00:16:59.328 "write_zeroes": true, 00:16:59.328 "zcopy": true, 00:16:59.328 "get_zone_info": false, 00:16:59.328 "zone_management": false, 00:16:59.328 "zone_append": false, 00:16:59.328 "compare": false, 00:16:59.328 "compare_and_write": false, 00:16:59.328 "abort": true, 00:16:59.328 "seek_hole": false, 00:16:59.328 "seek_data": false, 00:16:59.328 "copy": true, 00:16:59.328 "nvme_iov_md": false 00:16:59.328 }, 00:16:59.328 "memory_domains": [ 00:16:59.328 { 00:16:59.328 "dma_device_id": "system", 00:16:59.328 "dma_device_type": 1 00:16:59.328 }, 00:16:59.328 { 00:16:59.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.329 "dma_device_type": 2 00:16:59.329 } 00:16:59.329 ], 00:16:59.329 "driver_specific": {} 00:16:59.329 } 00:16:59.329 ] 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.329 11:49:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.329 "name": "Existed_Raid", 00:16:59.329 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:16:59.329 "strip_size_kb": 64, 00:16:59.329 "state": "configuring", 00:16:59.329 "raid_level": "raid5f", 00:16:59.329 "superblock": true, 00:16:59.329 "num_base_bdevs": 4, 00:16:59.329 "num_base_bdevs_discovered": 3, 00:16:59.329 "num_base_bdevs_operational": 4, 00:16:59.329 "base_bdevs_list": [ 00:16:59.329 { 00:16:59.329 "name": "BaseBdev1", 00:16:59.329 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:16:59.329 "is_configured": true, 00:16:59.329 "data_offset": 2048, 00:16:59.329 "data_size": 63488 00:16:59.329 
}, 00:16:59.329 { 00:16:59.329 "name": null, 00:16:59.329 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:16:59.329 "is_configured": false, 00:16:59.329 "data_offset": 0, 00:16:59.329 "data_size": 63488 00:16:59.329 }, 00:16:59.329 { 00:16:59.329 "name": "BaseBdev3", 00:16:59.329 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:16:59.329 "is_configured": true, 00:16:59.329 "data_offset": 2048, 00:16:59.329 "data_size": 63488 00:16:59.329 }, 00:16:59.329 { 00:16:59.329 "name": "BaseBdev4", 00:16:59.329 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:16:59.329 "is_configured": true, 00:16:59.329 "data_offset": 2048, 00:16:59.329 "data_size": 63488 00:16:59.329 } 00:16:59.329 ] 00:16:59.329 }' 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.329 11:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.896 
[2024-11-04 11:49:25.231208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.896 "name": "Existed_Raid", 00:16:59.896 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:16:59.896 "strip_size_kb": 64, 00:16:59.896 "state": "configuring", 00:16:59.896 "raid_level": "raid5f", 00:16:59.896 "superblock": true, 00:16:59.896 "num_base_bdevs": 4, 00:16:59.896 "num_base_bdevs_discovered": 2, 00:16:59.896 "num_base_bdevs_operational": 4, 00:16:59.896 "base_bdevs_list": [ 00:16:59.896 { 00:16:59.896 "name": "BaseBdev1", 00:16:59.896 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:16:59.896 "is_configured": true, 00:16:59.896 "data_offset": 2048, 00:16:59.896 "data_size": 63488 00:16:59.896 }, 00:16:59.896 { 00:16:59.896 "name": null, 00:16:59.896 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:16:59.896 "is_configured": false, 00:16:59.896 "data_offset": 0, 00:16:59.896 "data_size": 63488 00:16:59.896 }, 00:16:59.896 { 00:16:59.896 "name": null, 00:16:59.896 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:16:59.896 "is_configured": false, 00:16:59.896 "data_offset": 0, 00:16:59.896 "data_size": 63488 00:16:59.896 }, 00:16:59.896 { 00:16:59.896 "name": "BaseBdev4", 00:16:59.896 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:16:59.896 "is_configured": true, 00:16:59.896 "data_offset": 2048, 00:16:59.896 "data_size": 63488 00:16:59.896 } 00:16:59.896 ] 00:16:59.896 }' 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.896 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.155 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:00.155 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.155 11:49:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.155 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.445 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.445 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:00.445 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:00.445 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.445 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.445 [2024-11-04 11:49:25.718369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.446 11:49:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.446 "name": "Existed_Raid", 00:17:00.446 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:17:00.446 "strip_size_kb": 64, 00:17:00.446 "state": "configuring", 00:17:00.446 "raid_level": "raid5f", 00:17:00.446 "superblock": true, 00:17:00.446 "num_base_bdevs": 4, 00:17:00.446 "num_base_bdevs_discovered": 3, 00:17:00.446 "num_base_bdevs_operational": 4, 00:17:00.446 "base_bdevs_list": [ 00:17:00.446 { 00:17:00.446 "name": "BaseBdev1", 00:17:00.446 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:17:00.446 "is_configured": true, 00:17:00.446 "data_offset": 2048, 00:17:00.446 "data_size": 63488 00:17:00.446 }, 00:17:00.446 { 00:17:00.446 "name": null, 00:17:00.446 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:17:00.446 "is_configured": false, 00:17:00.446 "data_offset": 0, 00:17:00.446 "data_size": 63488 00:17:00.446 }, 00:17:00.446 { 00:17:00.446 "name": "BaseBdev3", 00:17:00.446 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:17:00.446 "is_configured": true, 00:17:00.446 "data_offset": 2048, 00:17:00.446 "data_size": 63488 00:17:00.446 }, 00:17:00.446 { 
00:17:00.446 "name": "BaseBdev4", 00:17:00.446 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:17:00.446 "is_configured": true, 00:17:00.446 "data_offset": 2048, 00:17:00.446 "data_size": 63488 00:17:00.446 } 00:17:00.446 ] 00:17:00.446 }' 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.446 11:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.703 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.703 [2024-11-04 11:49:26.181590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.962 "name": "Existed_Raid", 00:17:00.962 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:17:00.962 "strip_size_kb": 64, 00:17:00.962 "state": "configuring", 00:17:00.962 "raid_level": "raid5f", 00:17:00.962 "superblock": true, 00:17:00.962 "num_base_bdevs": 4, 00:17:00.962 "num_base_bdevs_discovered": 2, 00:17:00.962 
"num_base_bdevs_operational": 4, 00:17:00.962 "base_bdevs_list": [ 00:17:00.962 { 00:17:00.962 "name": null, 00:17:00.962 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:17:00.962 "is_configured": false, 00:17:00.962 "data_offset": 0, 00:17:00.962 "data_size": 63488 00:17:00.962 }, 00:17:00.962 { 00:17:00.962 "name": null, 00:17:00.962 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:17:00.962 "is_configured": false, 00:17:00.962 "data_offset": 0, 00:17:00.962 "data_size": 63488 00:17:00.962 }, 00:17:00.962 { 00:17:00.962 "name": "BaseBdev3", 00:17:00.962 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:17:00.962 "is_configured": true, 00:17:00.962 "data_offset": 2048, 00:17:00.962 "data_size": 63488 00:17:00.962 }, 00:17:00.962 { 00:17:00.962 "name": "BaseBdev4", 00:17:00.962 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:17:00.962 "is_configured": true, 00:17:00.962 "data_offset": 2048, 00:17:00.962 "data_size": 63488 00:17:00.962 } 00:17:00.962 ] 00:17:00.962 }' 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.962 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.220 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:01.220 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.220 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.220 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.610 [2024-11-04 11:49:26.787828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.610 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.610 "name": "Existed_Raid", 00:17:01.610 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:17:01.610 "strip_size_kb": 64, 00:17:01.610 "state": "configuring", 00:17:01.610 "raid_level": "raid5f", 00:17:01.610 "superblock": true, 00:17:01.610 "num_base_bdevs": 4, 00:17:01.610 "num_base_bdevs_discovered": 3, 00:17:01.610 "num_base_bdevs_operational": 4, 00:17:01.610 "base_bdevs_list": [ 00:17:01.610 { 00:17:01.610 "name": null, 00:17:01.610 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:17:01.610 "is_configured": false, 00:17:01.610 "data_offset": 0, 00:17:01.610 "data_size": 63488 00:17:01.610 }, 00:17:01.610 { 00:17:01.610 "name": "BaseBdev2", 00:17:01.610 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:17:01.610 "is_configured": true, 00:17:01.610 "data_offset": 2048, 00:17:01.610 "data_size": 63488 00:17:01.610 }, 00:17:01.610 { 00:17:01.610 "name": "BaseBdev3", 00:17:01.610 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:17:01.610 "is_configured": true, 00:17:01.610 "data_offset": 2048, 00:17:01.610 "data_size": 63488 00:17:01.610 }, 00:17:01.610 { 00:17:01.611 "name": "BaseBdev4", 00:17:01.611 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:17:01.611 "is_configured": true, 00:17:01.611 "data_offset": 2048, 00:17:01.611 "data_size": 63488 00:17:01.611 } 00:17:01.611 ] 00:17:01.611 }' 00:17:01.611 11:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.611 11:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b9d4ad7c-452f-4541-bd4f-d9c418e124aa 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 [2024-11-04 11:49:27.332297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:01.903 NewBaseBdev 00:17:01.903 [2024-11-04 11:49:27.332700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:01.903 
[2024-11-04 11:49:27.332719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:01.903 [2024-11-04 11:49:27.333013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 [2024-11-04 11:49:27.340615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:01.903 [2024-11-04 11:49:27.340678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:01.903 [2024-11-04 11:49:27.340857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 [ 00:17:01.903 { 00:17:01.903 "name": "NewBaseBdev", 00:17:01.903 "aliases": [ 00:17:01.903 "b9d4ad7c-452f-4541-bd4f-d9c418e124aa" 00:17:01.903 ], 00:17:01.903 "product_name": "Malloc disk", 00:17:01.903 "block_size": 512, 00:17:01.903 "num_blocks": 65536, 00:17:01.903 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:17:01.903 "assigned_rate_limits": { 00:17:01.903 "rw_ios_per_sec": 0, 00:17:01.903 "rw_mbytes_per_sec": 0, 00:17:01.903 "r_mbytes_per_sec": 0, 00:17:01.903 "w_mbytes_per_sec": 0 00:17:01.903 }, 00:17:01.903 "claimed": true, 00:17:01.903 "claim_type": "exclusive_write", 00:17:01.903 "zoned": false, 00:17:01.903 "supported_io_types": { 00:17:01.903 "read": true, 00:17:01.903 "write": true, 00:17:01.903 "unmap": true, 00:17:01.903 "flush": true, 00:17:01.903 "reset": true, 00:17:01.903 "nvme_admin": false, 00:17:01.903 "nvme_io": false, 00:17:01.903 "nvme_io_md": false, 00:17:01.903 "write_zeroes": true, 00:17:01.903 "zcopy": true, 00:17:01.903 "get_zone_info": false, 00:17:01.903 "zone_management": false, 00:17:01.903 "zone_append": false, 00:17:01.903 "compare": false, 00:17:01.903 "compare_and_write": false, 00:17:01.903 "abort": true, 00:17:01.903 "seek_hole": false, 00:17:01.903 "seek_data": false, 00:17:01.903 "copy": true, 00:17:01.903 "nvme_iov_md": false 00:17:01.903 }, 00:17:01.903 "memory_domains": [ 00:17:01.903 { 00:17:01.903 "dma_device_id": "system", 00:17:01.903 "dma_device_type": 1 00:17:01.903 }, 00:17:01.903 { 00:17:01.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.903 "dma_device_type": 2 00:17:01.903 } 00:17:01.903 ], 00:17:01.903 "driver_specific": {} 00:17:01.903 } 00:17:01.903 ] 00:17:01.903 11:49:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.903 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:02.162 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.162 "name": "Existed_Raid", 00:17:02.162 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:17:02.162 "strip_size_kb": 64, 00:17:02.162 "state": "online", 00:17:02.162 "raid_level": "raid5f", 00:17:02.162 "superblock": true, 00:17:02.162 "num_base_bdevs": 4, 00:17:02.162 "num_base_bdevs_discovered": 4, 00:17:02.162 "num_base_bdevs_operational": 4, 00:17:02.162 "base_bdevs_list": [ 00:17:02.162 { 00:17:02.162 "name": "NewBaseBdev", 00:17:02.162 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:17:02.162 "is_configured": true, 00:17:02.162 "data_offset": 2048, 00:17:02.162 "data_size": 63488 00:17:02.162 }, 00:17:02.162 { 00:17:02.162 "name": "BaseBdev2", 00:17:02.162 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:17:02.162 "is_configured": true, 00:17:02.162 "data_offset": 2048, 00:17:02.162 "data_size": 63488 00:17:02.162 }, 00:17:02.162 { 00:17:02.162 "name": "BaseBdev3", 00:17:02.162 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:17:02.162 "is_configured": true, 00:17:02.162 "data_offset": 2048, 00:17:02.162 "data_size": 63488 00:17:02.162 }, 00:17:02.162 { 00:17:02.162 "name": "BaseBdev4", 00:17:02.162 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:17:02.162 "is_configured": true, 00:17:02.162 "data_offset": 2048, 00:17:02.162 "data_size": 63488 00:17:02.162 } 00:17:02.162 ] 00:17:02.162 }' 00:17:02.162 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.162 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.420 [2024-11-04 11:49:27.844383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.420 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.420 "name": "Existed_Raid", 00:17:02.420 "aliases": [ 00:17:02.420 "e12db3fc-cf80-4eea-85b6-c57763aa8702" 00:17:02.420 ], 00:17:02.420 "product_name": "Raid Volume", 00:17:02.420 "block_size": 512, 00:17:02.420 "num_blocks": 190464, 00:17:02.420 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:17:02.420 "assigned_rate_limits": { 00:17:02.420 "rw_ios_per_sec": 0, 00:17:02.420 "rw_mbytes_per_sec": 0, 00:17:02.420 "r_mbytes_per_sec": 0, 00:17:02.420 "w_mbytes_per_sec": 0 00:17:02.420 }, 00:17:02.420 "claimed": false, 00:17:02.420 "zoned": false, 00:17:02.420 "supported_io_types": { 00:17:02.420 "read": true, 00:17:02.420 "write": true, 00:17:02.420 "unmap": false, 00:17:02.420 "flush": false, 00:17:02.420 "reset": true, 00:17:02.420 "nvme_admin": false, 00:17:02.420 "nvme_io": false, 
00:17:02.420 "nvme_io_md": false, 00:17:02.420 "write_zeroes": true, 00:17:02.420 "zcopy": false, 00:17:02.420 "get_zone_info": false, 00:17:02.420 "zone_management": false, 00:17:02.420 "zone_append": false, 00:17:02.420 "compare": false, 00:17:02.420 "compare_and_write": false, 00:17:02.420 "abort": false, 00:17:02.420 "seek_hole": false, 00:17:02.420 "seek_data": false, 00:17:02.420 "copy": false, 00:17:02.420 "nvme_iov_md": false 00:17:02.420 }, 00:17:02.420 "driver_specific": { 00:17:02.420 "raid": { 00:17:02.421 "uuid": "e12db3fc-cf80-4eea-85b6-c57763aa8702", 00:17:02.421 "strip_size_kb": 64, 00:17:02.421 "state": "online", 00:17:02.421 "raid_level": "raid5f", 00:17:02.421 "superblock": true, 00:17:02.421 "num_base_bdevs": 4, 00:17:02.421 "num_base_bdevs_discovered": 4, 00:17:02.421 "num_base_bdevs_operational": 4, 00:17:02.421 "base_bdevs_list": [ 00:17:02.421 { 00:17:02.421 "name": "NewBaseBdev", 00:17:02.421 "uuid": "b9d4ad7c-452f-4541-bd4f-d9c418e124aa", 00:17:02.421 "is_configured": true, 00:17:02.421 "data_offset": 2048, 00:17:02.421 "data_size": 63488 00:17:02.421 }, 00:17:02.421 { 00:17:02.421 "name": "BaseBdev2", 00:17:02.421 "uuid": "193d2823-94e7-4adb-9b4d-feea3670d4cb", 00:17:02.421 "is_configured": true, 00:17:02.421 "data_offset": 2048, 00:17:02.421 "data_size": 63488 00:17:02.421 }, 00:17:02.421 { 00:17:02.421 "name": "BaseBdev3", 00:17:02.421 "uuid": "d3ac1fd6-844b-49b1-8843-fe55f2fc9724", 00:17:02.421 "is_configured": true, 00:17:02.421 "data_offset": 2048, 00:17:02.421 "data_size": 63488 00:17:02.421 }, 00:17:02.421 { 00:17:02.421 "name": "BaseBdev4", 00:17:02.421 "uuid": "6a04bdd9-bc75-48df-b07f-55b88cb657c3", 00:17:02.421 "is_configured": true, 00:17:02.421 "data_offset": 2048, 00:17:02.421 "data_size": 63488 00:17:02.421 } 00:17:02.421 ] 00:17:02.421 } 00:17:02.421 } 00:17:02.421 }' 00:17:02.421 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:02.421 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:02.421 BaseBdev2 00:17:02.421 BaseBdev3 00:17:02.421 BaseBdev4' 00:17:02.421 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.679 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:02.679 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.679 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:02.679 11:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.680 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.680 11:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.680 11:49:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.680 11:49:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.680 [2024-11-04 11:49:28.159554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.680 [2024-11-04 11:49:28.159623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.680 [2024-11-04 11:49:28.159735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.680 [2024-11-04 11:49:28.160087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.680 [2024-11-04 11:49:28.160145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83679 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83679 ']' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83679 00:17:02.680 11:49:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:02.680 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83679 00:17:02.937 killing process with pid 83679 00:17:02.937 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:02.937 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:02.937 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83679' 00:17:02.937 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83679 00:17:02.937 [2024-11-04 11:49:28.205779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.937 11:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83679 00:17:03.194 [2024-11-04 11:49:28.593987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.563 11:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.563 00:17:04.563 real 0m11.561s 00:17:04.563 user 0m18.393s 00:17:04.563 sys 0m2.086s 00:17:04.563 11:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:04.563 ************************************ 00:17:04.563 END TEST raid5f_state_function_test_sb 00:17:04.563 ************************************ 00:17:04.563 11:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.563 11:49:29 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:04.563 11:49:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:04.563 
11:49:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:04.563 11:49:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.563 ************************************ 00:17:04.563 START TEST raid5f_superblock_test 00:17:04.563 ************************************ 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84350 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84350 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84350 ']' 00:17:04.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:04.563 11:49:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.563 [2024-11-04 11:49:29.826354] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:17:04.564 [2024-11-04 11:49:29.826580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84350 ] 00:17:04.564 [2024-11-04 11:49:29.979261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.821 [2024-11-04 11:49:30.089122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.821 [2024-11-04 11:49:30.286958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.821 [2024-11-04 11:49:30.287068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.387 malloc1 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.387 [2024-11-04 11:49:30.705458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.387 [2024-11-04 11:49:30.705563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.387 [2024-11-04 11:49:30.705606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:05.387 [2024-11-04 11:49:30.705667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.387 [2024-11-04 11:49:30.707705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.387 [2024-11-04 11:49:30.707776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.387 pt1 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.387 malloc2 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.387 [2024-11-04 11:49:30.763974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.387 [2024-11-04 11:49:30.764074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.387 [2024-11-04 11:49:30.764111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.387 [2024-11-04 11:49:30.764159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.387 [2024-11-04 11:49:30.766147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.387 [2024-11-04 11:49:30.766233] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.387 pt2 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.387 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.388 malloc3 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.388 [2024-11-04 11:49:30.834361] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:05.388 [2024-11-04 11:49:30.834462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.388 [2024-11-04 11:49:30.834499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:05.388 [2024-11-04 11:49:30.834548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.388 [2024-11-04 11:49:30.836645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.388 pt3 00:17:05.388 [2024-11-04 11:49:30.836718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.388 11:49:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.388 malloc4 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.388 [2024-11-04 11:49:30.887903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:05.388 [2024-11-04 11:49:30.888003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.388 [2024-11-04 11:49:30.888059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:05.388 [2024-11-04 11:49:30.888093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.388 [2024-11-04 11:49:30.890107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.388 [2024-11-04 11:49:30.890177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:05.388 pt4 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.388 11:49:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.388 [2024-11-04 11:49:30.899911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.388 [2024-11-04 11:49:30.901687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.388 [2024-11-04 11:49:30.901785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.388 [2024-11-04 11:49:30.901882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:05.388 [2024-11-04 11:49:30.902150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:05.388 [2024-11-04 11:49:30.902201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:05.388 [2024-11-04 11:49:30.902475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:05.646 [2024-11-04 11:49:30.909604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:05.646 [2024-11-04 11:49:30.909662] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:05.646 [2024-11-04 11:49:30.909924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.646 
11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.646 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.646 "name": "raid_bdev1", 00:17:05.646 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:05.646 "strip_size_kb": 64, 00:17:05.646 "state": "online", 00:17:05.646 "raid_level": "raid5f", 00:17:05.646 "superblock": true, 00:17:05.646 "num_base_bdevs": 4, 00:17:05.646 "num_base_bdevs_discovered": 4, 00:17:05.646 "num_base_bdevs_operational": 4, 00:17:05.646 "base_bdevs_list": [ 00:17:05.646 { 00:17:05.646 "name": "pt1", 00:17:05.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.646 "is_configured": true, 00:17:05.646 "data_offset": 2048, 00:17:05.646 "data_size": 63488 00:17:05.646 }, 00:17:05.646 { 00:17:05.646 "name": "pt2", 00:17:05.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.646 "is_configured": true, 00:17:05.646 "data_offset": 2048, 00:17:05.646 
"data_size": 63488 00:17:05.646 }, 00:17:05.646 { 00:17:05.646 "name": "pt3", 00:17:05.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.646 "is_configured": true, 00:17:05.646 "data_offset": 2048, 00:17:05.646 "data_size": 63488 00:17:05.646 }, 00:17:05.646 { 00:17:05.646 "name": "pt4", 00:17:05.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.646 "is_configured": true, 00:17:05.647 "data_offset": 2048, 00:17:05.647 "data_size": 63488 00:17:05.647 } 00:17:05.647 ] 00:17:05.647 }' 00:17:05.647 11:49:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.647 11:49:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.905 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.906 [2024-11-04 11:49:31.357468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.906 "name": "raid_bdev1", 00:17:05.906 "aliases": [ 00:17:05.906 "f572bfac-35fd-4220-aa1b-c01b8965ebe0" 00:17:05.906 ], 00:17:05.906 "product_name": "Raid Volume", 00:17:05.906 "block_size": 512, 00:17:05.906 "num_blocks": 190464, 00:17:05.906 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:05.906 "assigned_rate_limits": { 00:17:05.906 "rw_ios_per_sec": 0, 00:17:05.906 "rw_mbytes_per_sec": 0, 00:17:05.906 "r_mbytes_per_sec": 0, 00:17:05.906 "w_mbytes_per_sec": 0 00:17:05.906 }, 00:17:05.906 "claimed": false, 00:17:05.906 "zoned": false, 00:17:05.906 "supported_io_types": { 00:17:05.906 "read": true, 00:17:05.906 "write": true, 00:17:05.906 "unmap": false, 00:17:05.906 "flush": false, 00:17:05.906 "reset": true, 00:17:05.906 "nvme_admin": false, 00:17:05.906 "nvme_io": false, 00:17:05.906 "nvme_io_md": false, 00:17:05.906 "write_zeroes": true, 00:17:05.906 "zcopy": false, 00:17:05.906 "get_zone_info": false, 00:17:05.906 "zone_management": false, 00:17:05.906 "zone_append": false, 00:17:05.906 "compare": false, 00:17:05.906 "compare_and_write": false, 00:17:05.906 "abort": false, 00:17:05.906 "seek_hole": false, 00:17:05.906 "seek_data": false, 00:17:05.906 "copy": false, 00:17:05.906 "nvme_iov_md": false 00:17:05.906 }, 00:17:05.906 "driver_specific": { 00:17:05.906 "raid": { 00:17:05.906 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:05.906 "strip_size_kb": 64, 00:17:05.906 "state": "online", 00:17:05.906 "raid_level": "raid5f", 00:17:05.906 "superblock": true, 00:17:05.906 "num_base_bdevs": 4, 00:17:05.906 "num_base_bdevs_discovered": 4, 00:17:05.906 "num_base_bdevs_operational": 4, 00:17:05.906 "base_bdevs_list": [ 00:17:05.906 { 00:17:05.906 "name": "pt1", 00:17:05.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.906 "is_configured": true, 00:17:05.906 "data_offset": 2048, 
00:17:05.906 "data_size": 63488 00:17:05.906 }, 00:17:05.906 { 00:17:05.906 "name": "pt2", 00:17:05.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.906 "is_configured": true, 00:17:05.906 "data_offset": 2048, 00:17:05.906 "data_size": 63488 00:17:05.906 }, 00:17:05.906 { 00:17:05.906 "name": "pt3", 00:17:05.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.906 "is_configured": true, 00:17:05.906 "data_offset": 2048, 00:17:05.906 "data_size": 63488 00:17:05.906 }, 00:17:05.906 { 00:17:05.906 "name": "pt4", 00:17:05.906 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:05.906 "is_configured": true, 00:17:05.906 "data_offset": 2048, 00:17:05.906 "data_size": 63488 00:17:05.906 } 00:17:05.906 ] 00:17:05.906 } 00:17:05.906 } 00:17:05.906 }' 00:17:05.906 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.164 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:06.165 pt2 00:17:06.165 pt3 00:17:06.165 pt4' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.165 11:49:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.165 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.165 [2024-11-04 11:49:31.668955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.422 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.422 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f572bfac-35fd-4220-aa1b-c01b8965ebe0 00:17:06.422 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
f572bfac-35fd-4220-aa1b-c01b8965ebe0 ']' 00:17:06.422 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.422 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.422 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.422 [2024-11-04 11:49:31.712644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.422 [2024-11-04 11:49:31.712711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.422 [2024-11-04 11:49:31.712820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.422 [2024-11-04 11:49:31.712949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.423 [2024-11-04 11:49:31.713008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.423 
11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 11:49:31 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 [2024-11-04 11:49:31.880379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.423 [2024-11-04 11:49:31.882271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.423 [2024-11-04 11:49:31.882361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:06.423 [2024-11-04 11:49:31.882423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:06.423 [2024-11-04 11:49:31.882511] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:06.423 [2024-11-04 11:49:31.882606] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:06.423 [2024-11-04 11:49:31.882669] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:06.423 [2024-11-04 11:49:31.882715] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:06.423 [2024-11-04 11:49:31.882792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.423 [2024-11-04 11:49:31.882831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:06.423 request: 00:17:06.423 { 00:17:06.423 "name": "raid_bdev1", 00:17:06.423 "raid_level": "raid5f", 00:17:06.423 "base_bdevs": [ 00:17:06.423 "malloc1", 00:17:06.423 "malloc2", 00:17:06.423 "malloc3", 00:17:06.423 "malloc4" 00:17:06.423 ], 00:17:06.423 "strip_size_kb": 64, 00:17:06.423 "superblock": false, 00:17:06.423 "method": "bdev_raid_create", 00:17:06.423 "req_id": 1 00:17:06.423 } 00:17:06.423 Got JSON-RPC error response 
00:17:06.423 response: 00:17:06.423 { 00:17:06.423 "code": -17, 00:17:06.423 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.423 } 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.423 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 [2024-11-04 11:49:31.948234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.682 [2024-11-04 11:49:31.948323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:06.682 [2024-11-04 11:49:31.948354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:06.682 [2024-11-04 11:49:31.948413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.682 [2024-11-04 11:49:31.950526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.682 [2024-11-04 11:49:31.950598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.682 [2024-11-04 11:49:31.950698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:06.682 [2024-11-04 11:49:31.950790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.682 pt1 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.682 11:49:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.682 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.682 "name": "raid_bdev1", 00:17:06.682 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:06.682 "strip_size_kb": 64, 00:17:06.682 "state": "configuring", 00:17:06.682 "raid_level": "raid5f", 00:17:06.682 "superblock": true, 00:17:06.683 "num_base_bdevs": 4, 00:17:06.683 "num_base_bdevs_discovered": 1, 00:17:06.683 "num_base_bdevs_operational": 4, 00:17:06.683 "base_bdevs_list": [ 00:17:06.683 { 00:17:06.683 "name": "pt1", 00:17:06.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.683 "is_configured": true, 00:17:06.683 "data_offset": 2048, 00:17:06.683 "data_size": 63488 00:17:06.683 }, 00:17:06.683 { 00:17:06.683 "name": null, 00:17:06.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.683 "is_configured": false, 00:17:06.683 "data_offset": 2048, 00:17:06.683 "data_size": 63488 00:17:06.683 }, 00:17:06.683 { 00:17:06.683 "name": null, 00:17:06.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.683 "is_configured": false, 00:17:06.683 "data_offset": 2048, 00:17:06.683 "data_size": 63488 00:17:06.683 }, 00:17:06.683 { 00:17:06.683 "name": null, 00:17:06.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.683 "is_configured": false, 00:17:06.683 "data_offset": 2048, 00:17:06.683 "data_size": 63488 00:17:06.683 } 00:17:06.683 ] 00:17:06.683 }' 
00:17:06.683 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.683 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.940 [2024-11-04 11:49:32.347605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.940 [2024-11-04 11:49:32.347728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.940 [2024-11-04 11:49:32.347764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:06.940 [2024-11-04 11:49:32.347826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.940 [2024-11-04 11:49:32.348360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.940 [2024-11-04 11:49:32.348434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.940 [2024-11-04 11:49:32.348556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:06.940 [2024-11-04 11:49:32.348611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.940 pt2 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.940 [2024-11-04 11:49:32.359577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:06.940 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.940 "name": "raid_bdev1", 00:17:06.940 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:06.940 "strip_size_kb": 64, 00:17:06.940 "state": "configuring", 00:17:06.940 "raid_level": "raid5f", 00:17:06.940 "superblock": true, 00:17:06.940 "num_base_bdevs": 4, 00:17:06.940 "num_base_bdevs_discovered": 1, 00:17:06.940 "num_base_bdevs_operational": 4, 00:17:06.940 "base_bdevs_list": [ 00:17:06.940 { 00:17:06.940 "name": "pt1", 00:17:06.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.940 "is_configured": true, 00:17:06.940 "data_offset": 2048, 00:17:06.940 "data_size": 63488 00:17:06.940 }, 00:17:06.940 { 00:17:06.940 "name": null, 00:17:06.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.940 "is_configured": false, 00:17:06.940 "data_offset": 0, 00:17:06.940 "data_size": 63488 00:17:06.940 }, 00:17:06.940 { 00:17:06.940 "name": null, 00:17:06.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:06.941 "is_configured": false, 00:17:06.941 "data_offset": 2048, 00:17:06.941 "data_size": 63488 00:17:06.941 }, 00:17:06.941 { 00:17:06.941 "name": null, 00:17:06.941 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.941 "is_configured": false, 00:17:06.941 "data_offset": 2048, 00:17:06.941 "data_size": 63488 00:17:06.941 } 00:17:06.941 ] 00:17:06.941 }' 00:17:06.941 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.941 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.509 [2024-11-04 11:49:32.782871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.509 [2024-11-04 11:49:32.782979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.509 [2024-11-04 11:49:32.783017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:07.509 [2024-11-04 11:49:32.783077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.509 [2024-11-04 11:49:32.783637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.509 [2024-11-04 11:49:32.783696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.509 [2024-11-04 11:49:32.783823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:07.509 [2024-11-04 11:49:32.783876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.509 pt2 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.509 [2024-11-04 11:49:32.794814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:07.509 [2024-11-04 11:49:32.794902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.509 [2024-11-04 11:49:32.794923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:07.509 [2024-11-04 11:49:32.794932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.509 [2024-11-04 11:49:32.795338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.509 [2024-11-04 11:49:32.795355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:07.509 [2024-11-04 11:49:32.795445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:07.509 [2024-11-04 11:49:32.795466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:07.509 pt3 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.509 [2024-11-04 11:49:32.806763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:07.509 [2024-11-04 11:49:32.806853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.509 [2024-11-04 11:49:32.806925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:07.509 [2024-11-04 11:49:32.806956] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.509 [2024-11-04 11:49:32.807422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.509 [2024-11-04 11:49:32.807488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:07.509 [2024-11-04 11:49:32.807594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:07.509 [2024-11-04 11:49:32.807642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:07.509 [2024-11-04 11:49:32.807821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:07.509 [2024-11-04 11:49:32.807859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:07.509 [2024-11-04 11:49:32.808127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:07.509 [2024-11-04 11:49:32.815014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:07.509 [2024-11-04 11:49:32.815072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:07.509 [2024-11-04 11:49:32.815319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.509 pt4 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.509 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.509 "name": "raid_bdev1", 00:17:07.510 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:07.510 "strip_size_kb": 64, 00:17:07.510 "state": "online", 00:17:07.510 "raid_level": "raid5f", 00:17:07.510 "superblock": true, 00:17:07.510 "num_base_bdevs": 4, 00:17:07.510 "num_base_bdevs_discovered": 4, 00:17:07.510 "num_base_bdevs_operational": 4, 00:17:07.510 "base_bdevs_list": [ 00:17:07.510 { 00:17:07.510 "name": "pt1", 00:17:07.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.510 "is_configured": true, 00:17:07.510 
"data_offset": 2048, 00:17:07.510 "data_size": 63488 00:17:07.510 }, 00:17:07.510 { 00:17:07.510 "name": "pt2", 00:17:07.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.510 "is_configured": true, 00:17:07.510 "data_offset": 2048, 00:17:07.510 "data_size": 63488 00:17:07.510 }, 00:17:07.510 { 00:17:07.510 "name": "pt3", 00:17:07.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:07.510 "is_configured": true, 00:17:07.510 "data_offset": 2048, 00:17:07.510 "data_size": 63488 00:17:07.510 }, 00:17:07.510 { 00:17:07.510 "name": "pt4", 00:17:07.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:07.510 "is_configured": true, 00:17:07.510 "data_offset": 2048, 00:17:07.510 "data_size": 63488 00:17:07.510 } 00:17:07.510 ] 00:17:07.510 }' 00:17:07.510 11:49:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.510 11:49:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.076 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:08.076 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:08.076 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.076 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.077 11:49:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.077 [2024-11-04 11:49:33.303537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.077 "name": "raid_bdev1", 00:17:08.077 "aliases": [ 00:17:08.077 "f572bfac-35fd-4220-aa1b-c01b8965ebe0" 00:17:08.077 ], 00:17:08.077 "product_name": "Raid Volume", 00:17:08.077 "block_size": 512, 00:17:08.077 "num_blocks": 190464, 00:17:08.077 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:08.077 "assigned_rate_limits": { 00:17:08.077 "rw_ios_per_sec": 0, 00:17:08.077 "rw_mbytes_per_sec": 0, 00:17:08.077 "r_mbytes_per_sec": 0, 00:17:08.077 "w_mbytes_per_sec": 0 00:17:08.077 }, 00:17:08.077 "claimed": false, 00:17:08.077 "zoned": false, 00:17:08.077 "supported_io_types": { 00:17:08.077 "read": true, 00:17:08.077 "write": true, 00:17:08.077 "unmap": false, 00:17:08.077 "flush": false, 00:17:08.077 "reset": true, 00:17:08.077 "nvme_admin": false, 00:17:08.077 "nvme_io": false, 00:17:08.077 "nvme_io_md": false, 00:17:08.077 "write_zeroes": true, 00:17:08.077 "zcopy": false, 00:17:08.077 "get_zone_info": false, 00:17:08.077 "zone_management": false, 00:17:08.077 "zone_append": false, 00:17:08.077 "compare": false, 00:17:08.077 "compare_and_write": false, 00:17:08.077 "abort": false, 00:17:08.077 "seek_hole": false, 00:17:08.077 "seek_data": false, 00:17:08.077 "copy": false, 00:17:08.077 "nvme_iov_md": false 00:17:08.077 }, 00:17:08.077 "driver_specific": { 00:17:08.077 "raid": { 00:17:08.077 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:08.077 "strip_size_kb": 64, 00:17:08.077 "state": "online", 00:17:08.077 "raid_level": "raid5f", 00:17:08.077 "superblock": true, 00:17:08.077 "num_base_bdevs": 4, 00:17:08.077 "num_base_bdevs_discovered": 4, 
00:17:08.077 "num_base_bdevs_operational": 4, 00:17:08.077 "base_bdevs_list": [ 00:17:08.077 { 00:17:08.077 "name": "pt1", 00:17:08.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.077 "is_configured": true, 00:17:08.077 "data_offset": 2048, 00:17:08.077 "data_size": 63488 00:17:08.077 }, 00:17:08.077 { 00:17:08.077 "name": "pt2", 00:17:08.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.077 "is_configured": true, 00:17:08.077 "data_offset": 2048, 00:17:08.077 "data_size": 63488 00:17:08.077 }, 00:17:08.077 { 00:17:08.077 "name": "pt3", 00:17:08.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.077 "is_configured": true, 00:17:08.077 "data_offset": 2048, 00:17:08.077 "data_size": 63488 00:17:08.077 }, 00:17:08.077 { 00:17:08.077 "name": "pt4", 00:17:08.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.077 "is_configured": true, 00:17:08.077 "data_offset": 2048, 00:17:08.077 "data_size": 63488 00:17:08.077 } 00:17:08.077 ] 00:17:08.077 } 00:17:08.077 } 00:17:08.077 }' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:08.077 pt2 00:17:08.077 pt3 00:17:08.077 pt4' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.077 11:49:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.077 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:08.337 [2024-11-04 11:49:33.606983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.337 
11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f572bfac-35fd-4220-aa1b-c01b8965ebe0 '!=' f572bfac-35fd-4220-aa1b-c01b8965ebe0 ']' 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.337 [2024-11-04 11:49:33.654746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.337 "name": "raid_bdev1", 00:17:08.337 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:08.337 "strip_size_kb": 64, 00:17:08.337 "state": "online", 00:17:08.337 "raid_level": "raid5f", 00:17:08.337 "superblock": true, 00:17:08.337 "num_base_bdevs": 4, 00:17:08.337 "num_base_bdevs_discovered": 3, 00:17:08.337 "num_base_bdevs_operational": 3, 00:17:08.337 "base_bdevs_list": [ 00:17:08.337 { 00:17:08.337 "name": null, 00:17:08.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.337 "is_configured": false, 00:17:08.337 "data_offset": 0, 00:17:08.337 "data_size": 63488 00:17:08.337 }, 00:17:08.337 { 00:17:08.337 "name": "pt2", 00:17:08.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.337 "is_configured": true, 00:17:08.337 "data_offset": 2048, 00:17:08.337 "data_size": 63488 00:17:08.337 }, 00:17:08.337 { 00:17:08.337 "name": "pt3", 00:17:08.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.337 "is_configured": true, 00:17:08.337 "data_offset": 2048, 00:17:08.337 "data_size": 63488 00:17:08.337 }, 00:17:08.337 { 00:17:08.337 "name": "pt4", 00:17:08.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.337 "is_configured": true, 00:17:08.337 
"data_offset": 2048, 00:17:08.337 "data_size": 63488 00:17:08.337 } 00:17:08.337 ] 00:17:08.337 }' 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.337 11:49:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.910 [2024-11-04 11:49:34.137899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.910 [2024-11-04 11:49:34.137928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.910 [2024-11-04 11:49:34.138009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.910 [2024-11-04 11:49:34.138082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.910 [2024-11-04 11:49:34.138091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:08.910 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.911 [2024-11-04 11:49:34.237702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.911 [2024-11-04 11:49:34.237792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.911 [2024-11-04 11:49:34.237826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:08.911 [2024-11-04 11:49:34.237856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.911 [2024-11-04 11:49:34.239986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.911 [2024-11-04 11:49:34.240061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.911 [2024-11-04 11:49:34.240178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.911 [2024-11-04 11:49:34.240270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.911 pt2 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.911 "name": "raid_bdev1", 00:17:08.911 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:08.911 "strip_size_kb": 64, 00:17:08.911 "state": "configuring", 00:17:08.911 "raid_level": "raid5f", 00:17:08.911 "superblock": true, 00:17:08.911 
"num_base_bdevs": 4, 00:17:08.911 "num_base_bdevs_discovered": 1, 00:17:08.911 "num_base_bdevs_operational": 3, 00:17:08.911 "base_bdevs_list": [ 00:17:08.911 { 00:17:08.911 "name": null, 00:17:08.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.911 "is_configured": false, 00:17:08.911 "data_offset": 2048, 00:17:08.911 "data_size": 63488 00:17:08.911 }, 00:17:08.911 { 00:17:08.911 "name": "pt2", 00:17:08.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.911 "is_configured": true, 00:17:08.911 "data_offset": 2048, 00:17:08.911 "data_size": 63488 00:17:08.911 }, 00:17:08.911 { 00:17:08.911 "name": null, 00:17:08.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:08.911 "is_configured": false, 00:17:08.911 "data_offset": 2048, 00:17:08.911 "data_size": 63488 00:17:08.911 }, 00:17:08.911 { 00:17:08.911 "name": null, 00:17:08.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:08.911 "is_configured": false, 00:17:08.911 "data_offset": 2048, 00:17:08.911 "data_size": 63488 00:17:08.911 } 00:17:08.911 ] 00:17:08.911 }' 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.911 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.181 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.441 [2024-11-04 11:49:34.708950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:09.441 [2024-11-04 
11:49:34.709060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.441 [2024-11-04 11:49:34.709100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:09.441 [2024-11-04 11:49:34.709146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.441 [2024-11-04 11:49:34.709648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.441 [2024-11-04 11:49:34.709706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:09.441 [2024-11-04 11:49:34.709828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:09.441 [2024-11-04 11:49:34.709890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:09.441 pt3 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.441 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.442 "name": "raid_bdev1", 00:17:09.442 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:09.442 "strip_size_kb": 64, 00:17:09.442 "state": "configuring", 00:17:09.442 "raid_level": "raid5f", 00:17:09.442 "superblock": true, 00:17:09.442 "num_base_bdevs": 4, 00:17:09.442 "num_base_bdevs_discovered": 2, 00:17:09.442 "num_base_bdevs_operational": 3, 00:17:09.442 "base_bdevs_list": [ 00:17:09.442 { 00:17:09.442 "name": null, 00:17:09.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.442 "is_configured": false, 00:17:09.442 "data_offset": 2048, 00:17:09.442 "data_size": 63488 00:17:09.442 }, 00:17:09.442 { 00:17:09.442 "name": "pt2", 00:17:09.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.442 "is_configured": true, 00:17:09.442 "data_offset": 2048, 00:17:09.442 "data_size": 63488 00:17:09.442 }, 00:17:09.442 { 00:17:09.442 "name": "pt3", 00:17:09.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.442 "is_configured": true, 00:17:09.442 "data_offset": 2048, 00:17:09.442 "data_size": 63488 00:17:09.442 }, 00:17:09.442 { 00:17:09.442 "name": null, 00:17:09.442 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.442 "is_configured": false, 00:17:09.442 "data_offset": 2048, 
00:17:09.442 "data_size": 63488 00:17:09.442 } 00:17:09.442 ] 00:17:09.442 }' 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.442 11:49:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.702 [2024-11-04 11:49:35.184130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:09.702 [2024-11-04 11:49:35.184234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.702 [2024-11-04 11:49:35.184273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:09.702 [2024-11-04 11:49:35.184333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.702 [2024-11-04 11:49:35.184835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.702 [2024-11-04 11:49:35.184892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:09.702 [2024-11-04 11:49:35.185014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:09.702 [2024-11-04 11:49:35.185069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:09.702 [2024-11-04 11:49:35.185251] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.702 [2024-11-04 11:49:35.185290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:09.702 [2024-11-04 11:49:35.185587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:09.702 [2024-11-04 11:49:35.192679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.702 pt4 00:17:09.702 [2024-11-04 11:49:35.192740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:09.702 [2024-11-04 11:49:35.193062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.702 
11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.702 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.962 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.962 "name": "raid_bdev1", 00:17:09.962 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:09.962 "strip_size_kb": 64, 00:17:09.962 "state": "online", 00:17:09.962 "raid_level": "raid5f", 00:17:09.962 "superblock": true, 00:17:09.962 "num_base_bdevs": 4, 00:17:09.962 "num_base_bdevs_discovered": 3, 00:17:09.962 "num_base_bdevs_operational": 3, 00:17:09.962 "base_bdevs_list": [ 00:17:09.962 { 00:17:09.962 "name": null, 00:17:09.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.962 "is_configured": false, 00:17:09.962 "data_offset": 2048, 00:17:09.962 "data_size": 63488 00:17:09.962 }, 00:17:09.962 { 00:17:09.962 "name": "pt2", 00:17:09.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.962 "is_configured": true, 00:17:09.962 "data_offset": 2048, 00:17:09.962 "data_size": 63488 00:17:09.962 }, 00:17:09.962 { 00:17:09.962 "name": "pt3", 00:17:09.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:09.962 "is_configured": true, 00:17:09.962 "data_offset": 2048, 00:17:09.962 "data_size": 63488 00:17:09.962 }, 00:17:09.962 { 00:17:09.962 "name": "pt4", 00:17:09.962 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:09.962 "is_configured": true, 00:17:09.962 "data_offset": 2048, 00:17:09.962 "data_size": 63488 00:17:09.962 } 00:17:09.962 ] 00:17:09.962 }' 00:17:09.962 11:49:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.962 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.221 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.222 [2024-11-04 11:49:35.630305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.222 [2024-11-04 11:49:35.630381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.222 [2024-11-04 11:49:35.630534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.222 [2024-11-04 11:49:35.630654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.222 [2024-11-04 11:49:35.630711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.222 [2024-11-04 11:49:35.702179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.222 [2024-11-04 11:49:35.702304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.222 [2024-11-04 11:49:35.702379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:10.222 [2024-11-04 11:49:35.702448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.222 [2024-11-04 11:49:35.705014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.222 [2024-11-04 11:49:35.705100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.222 [2024-11-04 11:49:35.705247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:10.222 [2024-11-04 11:49:35.705357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:10.222 
[2024-11-04 11:49:35.705570] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:10.222 [2024-11-04 11:49:35.705635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.222 [2024-11-04 11:49:35.705701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:10.222 [2024-11-04 11:49:35.705830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.222 [2024-11-04 11:49:35.706000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:10.222 pt1 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.222 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.481 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.481 "name": "raid_bdev1", 00:17:10.481 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:10.481 "strip_size_kb": 64, 00:17:10.481 "state": "configuring", 00:17:10.481 "raid_level": "raid5f", 00:17:10.481 "superblock": true, 00:17:10.481 "num_base_bdevs": 4, 00:17:10.481 "num_base_bdevs_discovered": 2, 00:17:10.481 "num_base_bdevs_operational": 3, 00:17:10.481 "base_bdevs_list": [ 00:17:10.481 { 00:17:10.481 "name": null, 00:17:10.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.481 "is_configured": false, 00:17:10.481 "data_offset": 2048, 00:17:10.481 "data_size": 63488 00:17:10.481 }, 00:17:10.481 { 00:17:10.481 "name": "pt2", 00:17:10.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.481 "is_configured": true, 00:17:10.481 "data_offset": 2048, 00:17:10.481 "data_size": 63488 00:17:10.481 }, 00:17:10.481 { 00:17:10.481 "name": "pt3", 00:17:10.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:10.481 "is_configured": true, 00:17:10.481 "data_offset": 2048, 00:17:10.481 "data_size": 63488 00:17:10.481 }, 00:17:10.481 { 00:17:10.481 "name": null, 00:17:10.481 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:10.481 "is_configured": false, 00:17:10.481 "data_offset": 2048, 00:17:10.481 "data_size": 63488 00:17:10.481 } 00:17:10.481 ] 
00:17:10.481 }' 00:17:10.481 11:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.481 11:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.741 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.741 [2024-11-04 11:49:36.213389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:10.741 [2024-11-04 11:49:36.213537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.741 [2024-11-04 11:49:36.213584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:10.741 [2024-11-04 11:49:36.213619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.741 [2024-11-04 11:49:36.214152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.741 [2024-11-04 11:49:36.214221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:10.741 [2024-11-04 11:49:36.214336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:10.741 [2024-11-04 11:49:36.214372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:10.741 [2024-11-04 11:49:36.214556] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:10.741 [2024-11-04 11:49:36.214567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:10.741 [2024-11-04 11:49:36.214838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:10.742 [2024-11-04 11:49:36.223048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:10.742 [2024-11-04 11:49:36.223111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:10.742 [2024-11-04 11:49:36.223491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.742 pt4 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.742 11:49:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.742 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.001 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.001 "name": "raid_bdev1", 00:17:11.001 "uuid": "f572bfac-35fd-4220-aa1b-c01b8965ebe0", 00:17:11.001 "strip_size_kb": 64, 00:17:11.001 "state": "online", 00:17:11.001 "raid_level": "raid5f", 00:17:11.001 "superblock": true, 00:17:11.001 "num_base_bdevs": 4, 00:17:11.001 "num_base_bdevs_discovered": 3, 00:17:11.001 "num_base_bdevs_operational": 3, 00:17:11.001 "base_bdevs_list": [ 00:17:11.001 { 00:17:11.001 "name": null, 00:17:11.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.001 "is_configured": false, 00:17:11.001 "data_offset": 2048, 00:17:11.001 "data_size": 63488 00:17:11.001 }, 00:17:11.001 { 00:17:11.001 "name": "pt2", 00:17:11.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.001 "is_configured": true, 00:17:11.001 "data_offset": 2048, 00:17:11.001 "data_size": 63488 00:17:11.001 }, 00:17:11.001 { 00:17:11.001 "name": "pt3", 00:17:11.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:11.001 "is_configured": true, 00:17:11.001 "data_offset": 2048, 00:17:11.001 "data_size": 63488 
00:17:11.001 }, 00:17:11.001 { 00:17:11.001 "name": "pt4", 00:17:11.001 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:11.001 "is_configured": true, 00:17:11.001 "data_offset": 2048, 00:17:11.001 "data_size": 63488 00:17:11.001 } 00:17:11.001 ] 00:17:11.001 }' 00:17:11.001 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.001 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:11.260 [2024-11-04 11:49:36.709550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f572bfac-35fd-4220-aa1b-c01b8965ebe0 '!=' f572bfac-35fd-4220-aa1b-c01b8965ebe0 ']' 00:17:11.260 11:49:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84350 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84350 ']' 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84350 00:17:11.260 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:11.261 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:11.261 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84350 00:17:11.520 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:11.520 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:11.520 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84350' 00:17:11.520 killing process with pid 84350 00:17:11.520 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84350 00:17:11.520 [2024-11-04 11:49:36.794575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.520 [2024-11-04 11:49:36.794679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.520 11:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84350 00:17:11.520 [2024-11-04 11:49:36.794824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.520 [2024-11-04 11:49:36.794842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:11.779 [2024-11-04 11:49:37.192125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.163 11:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:13.163 
00:17:13.163 real 0m8.543s 00:17:13.163 user 0m13.503s 00:17:13.163 sys 0m1.485s 00:17:13.163 11:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:13.163 ************************************ 00:17:13.163 END TEST raid5f_superblock_test 00:17:13.163 ************************************ 00:17:13.163 11:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.163 11:49:38 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:13.163 11:49:38 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:13.163 11:49:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:13.163 11:49:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:13.163 11:49:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.163 ************************************ 00:17:13.163 START TEST raid5f_rebuild_test 00:17:13.163 ************************************ 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.163 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:13.164 11:49:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84839 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84839 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84839 ']' 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:13.164 11:49:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.164 [2024-11-04 11:49:38.454826] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:17:13.164 [2024-11-04 11:49:38.455037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84839 ] 00:17:13.164 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:13.164 Zero copy mechanism will not be used. 00:17:13.164 [2024-11-04 11:49:38.626561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.423 [2024-11-04 11:49:38.739543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.423 [2024-11-04 11:49:38.936928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.423 [2024-11-04 11:49:38.937072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.016 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.017 BaseBdev1_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:17:14.017 [2024-11-04 11:49:39.336814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:14.017 [2024-11-04 11:49:39.336924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.017 [2024-11-04 11:49:39.336965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:14.017 [2024-11-04 11:49:39.337040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.017 [2024-11-04 11:49:39.339065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.017 [2024-11-04 11:49:39.339137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:14.017 BaseBdev1 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.017 BaseBdev2_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.017 [2024-11-04 11:49:39.390707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:14.017 [2024-11-04 11:49:39.390821] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.017 [2024-11-04 11:49:39.390856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:14.017 [2024-11-04 11:49:39.390890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.017 [2024-11-04 11:49:39.392932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.017 [2024-11-04 11:49:39.393007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:14.017 BaseBdev2 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.017 BaseBdev3_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.017 [2024-11-04 11:49:39.458234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:14.017 [2024-11-04 11:49:39.458343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.017 [2024-11-04 11:49:39.458403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:14.017 
[2024-11-04 11:49:39.458443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.017 [2024-11-04 11:49:39.460438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.017 [2024-11-04 11:49:39.460512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:14.017 BaseBdev3 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.017 BaseBdev4_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.017 [2024-11-04 11:49:39.510069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:14.017 [2024-11-04 11:49:39.510163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.017 [2024-11-04 11:49:39.510197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:14.017 [2024-11-04 11:49:39.510225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.017 [2024-11-04 11:49:39.512269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:14.017 [2024-11-04 11:49:39.512348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:14.017 BaseBdev4 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.017 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.276 spare_malloc 00:17:14.276 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.276 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.294 spare_delay 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.294 [2024-11-04 11:49:39.576497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:14.294 [2024-11-04 11:49:39.576599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.294 [2024-11-04 11:49:39.576638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:14.294 [2024-11-04 11:49:39.576670] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.294 [2024-11-04 11:49:39.578776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.294 [2024-11-04 11:49:39.578848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:14.294 spare 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.294 [2024-11-04 11:49:39.588529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.294 [2024-11-04 11:49:39.590363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.294 [2024-11-04 11:49:39.590482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:14.294 [2024-11-04 11:49:39.590561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:14.294 [2024-11-04 11:49:39.590684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:14.294 [2024-11-04 11:49:39.590735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:14.294 [2024-11-04 11:49:39.591014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:14.294 [2024-11-04 11:49:39.598418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:14.294 [2024-11-04 11:49:39.598471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:14.294 [2024-11-04 
11:49:39.598737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.294 "name": "raid_bdev1", 00:17:14.294 "uuid": 
"b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:14.294 "strip_size_kb": 64, 00:17:14.294 "state": "online", 00:17:14.294 "raid_level": "raid5f", 00:17:14.294 "superblock": false, 00:17:14.294 "num_base_bdevs": 4, 00:17:14.294 "num_base_bdevs_discovered": 4, 00:17:14.294 "num_base_bdevs_operational": 4, 00:17:14.294 "base_bdevs_list": [ 00:17:14.294 { 00:17:14.294 "name": "BaseBdev1", 00:17:14.294 "uuid": "d0793df5-6882-5921-94df-b47544a45de6", 00:17:14.294 "is_configured": true, 00:17:14.294 "data_offset": 0, 00:17:14.294 "data_size": 65536 00:17:14.294 }, 00:17:14.294 { 00:17:14.294 "name": "BaseBdev2", 00:17:14.294 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:14.294 "is_configured": true, 00:17:14.294 "data_offset": 0, 00:17:14.294 "data_size": 65536 00:17:14.294 }, 00:17:14.294 { 00:17:14.294 "name": "BaseBdev3", 00:17:14.294 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:14.294 "is_configured": true, 00:17:14.294 "data_offset": 0, 00:17:14.294 "data_size": 65536 00:17:14.294 }, 00:17:14.294 { 00:17:14.294 "name": "BaseBdev4", 00:17:14.294 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:14.294 "is_configured": true, 00:17:14.294 "data_offset": 0, 00:17:14.294 "data_size": 65536 00:17:14.294 } 00:17:14.294 ] 00:17:14.294 }' 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.294 11:49:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.553 [2024-11-04 11:49:40.034878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:14.553 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.813 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:14.813 [2024-11-04 11:49:40.310239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:15.073 /dev/nbd0 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.073 1+0 records in 00:17:15.073 1+0 records out 00:17:15.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229807 s, 17.8 MB/s 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.073 11:49:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:15.073 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:15.643 512+0 records in 00:17:15.643 512+0 records out 00:17:15.643 100663296 bytes (101 MB, 96 MiB) copied, 0.478169 s, 211 MB/s 00:17:15.643 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:15.643 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.643 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:15.643 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:15.643 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:15.643 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:15.643 11:49:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:17:15.643 [2024-11-04 11:49:41.056470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.643 [2024-11-04 11:49:41.091597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.643 "name": "raid_bdev1", 00:17:15.643 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:15.643 "strip_size_kb": 64, 00:17:15.643 "state": "online", 00:17:15.643 "raid_level": "raid5f", 00:17:15.643 "superblock": false, 00:17:15.643 "num_base_bdevs": 4, 00:17:15.643 "num_base_bdevs_discovered": 3, 00:17:15.643 "num_base_bdevs_operational": 3, 00:17:15.643 "base_bdevs_list": [ 00:17:15.643 { 00:17:15.643 "name": null, 00:17:15.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.643 "is_configured": false, 00:17:15.643 "data_offset": 0, 00:17:15.643 "data_size": 65536 00:17:15.643 }, 00:17:15.643 { 00:17:15.643 "name": "BaseBdev2", 00:17:15.643 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:15.643 "is_configured": true, 00:17:15.643 
"data_offset": 0, 00:17:15.643 "data_size": 65536 00:17:15.643 }, 00:17:15.643 { 00:17:15.643 "name": "BaseBdev3", 00:17:15.643 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:15.643 "is_configured": true, 00:17:15.643 "data_offset": 0, 00:17:15.643 "data_size": 65536 00:17:15.643 }, 00:17:15.643 { 00:17:15.643 "name": "BaseBdev4", 00:17:15.643 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:15.643 "is_configured": true, 00:17:15.643 "data_offset": 0, 00:17:15.643 "data_size": 65536 00:17:15.643 } 00:17:15.643 ] 00:17:15.643 }' 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.643 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.212 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:16.212 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.212 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.212 [2024-11-04 11:49:41.534834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.212 [2024-11-04 11:49:41.550734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:16.212 11:49:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.212 11:49:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:16.212 [2024-11-04 11:49:41.559994] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.151 "name": "raid_bdev1", 00:17:17.151 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:17.151 "strip_size_kb": 64, 00:17:17.151 "state": "online", 00:17:17.151 "raid_level": "raid5f", 00:17:17.151 "superblock": false, 00:17:17.151 "num_base_bdevs": 4, 00:17:17.151 "num_base_bdevs_discovered": 4, 00:17:17.151 "num_base_bdevs_operational": 4, 00:17:17.151 "process": { 00:17:17.151 "type": "rebuild", 00:17:17.151 "target": "spare", 00:17:17.151 "progress": { 00:17:17.151 "blocks": 19200, 00:17:17.151 "percent": 9 00:17:17.151 } 00:17:17.151 }, 00:17:17.151 "base_bdevs_list": [ 00:17:17.151 { 00:17:17.151 "name": "spare", 00:17:17.151 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:17.151 "is_configured": true, 00:17:17.151 "data_offset": 0, 00:17:17.151 "data_size": 65536 00:17:17.151 }, 00:17:17.151 { 00:17:17.151 "name": "BaseBdev2", 00:17:17.151 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:17.151 "is_configured": true, 00:17:17.151 "data_offset": 0, 00:17:17.151 "data_size": 65536 00:17:17.151 }, 00:17:17.151 { 00:17:17.151 "name": "BaseBdev3", 00:17:17.151 "uuid": 
"3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:17.151 "is_configured": true, 00:17:17.151 "data_offset": 0, 00:17:17.151 "data_size": 65536 00:17:17.151 }, 00:17:17.151 { 00:17:17.151 "name": "BaseBdev4", 00:17:17.151 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:17.151 "is_configured": true, 00:17:17.151 "data_offset": 0, 00:17:17.151 "data_size": 65536 00:17:17.151 } 00:17:17.151 ] 00:17:17.151 }' 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.151 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.411 [2024-11-04 11:49:42.715096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.411 [2024-11-04 11:49:42.767130] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.411 [2024-11-04 11:49:42.767249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.411 [2024-11-04 11:49:42.767290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.411 [2024-11-04 11:49:42.767315] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.411 "name": "raid_bdev1", 00:17:17.411 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:17.411 "strip_size_kb": 64, 00:17:17.411 "state": "online", 00:17:17.411 "raid_level": "raid5f", 00:17:17.411 "superblock": false, 00:17:17.411 "num_base_bdevs": 4, 00:17:17.411 "num_base_bdevs_discovered": 3, 00:17:17.411 
"num_base_bdevs_operational": 3, 00:17:17.411 "base_bdevs_list": [ 00:17:17.411 { 00:17:17.411 "name": null, 00:17:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.411 "is_configured": false, 00:17:17.411 "data_offset": 0, 00:17:17.411 "data_size": 65536 00:17:17.411 }, 00:17:17.411 { 00:17:17.411 "name": "BaseBdev2", 00:17:17.411 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:17.411 "is_configured": true, 00:17:17.411 "data_offset": 0, 00:17:17.411 "data_size": 65536 00:17:17.411 }, 00:17:17.411 { 00:17:17.411 "name": "BaseBdev3", 00:17:17.411 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:17.411 "is_configured": true, 00:17:17.411 "data_offset": 0, 00:17:17.411 "data_size": 65536 00:17:17.411 }, 00:17:17.411 { 00:17:17.411 "name": "BaseBdev4", 00:17:17.411 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:17.411 "is_configured": true, 00:17:17.411 "data_offset": 0, 00:17:17.411 "data_size": 65536 00:17:17.411 } 00:17:17.411 ] 00:17:17.411 }' 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.411 11:49:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.979 11:49:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.979 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.979 "name": "raid_bdev1", 00:17:17.979 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:17.979 "strip_size_kb": 64, 00:17:17.979 "state": "online", 00:17:17.979 "raid_level": "raid5f", 00:17:17.979 "superblock": false, 00:17:17.979 "num_base_bdevs": 4, 00:17:17.979 "num_base_bdevs_discovered": 3, 00:17:17.979 "num_base_bdevs_operational": 3, 00:17:17.979 "base_bdevs_list": [ 00:17:17.979 { 00:17:17.979 "name": null, 00:17:17.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.979 "is_configured": false, 00:17:17.979 "data_offset": 0, 00:17:17.980 "data_size": 65536 00:17:17.980 }, 00:17:17.980 { 00:17:17.980 "name": "BaseBdev2", 00:17:17.980 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:17.980 "is_configured": true, 00:17:17.980 "data_offset": 0, 00:17:17.980 "data_size": 65536 00:17:17.980 }, 00:17:17.980 { 00:17:17.980 "name": "BaseBdev3", 00:17:17.980 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:17.980 "is_configured": true, 00:17:17.980 "data_offset": 0, 00:17:17.980 "data_size": 65536 00:17:17.980 }, 00:17:17.980 { 00:17:17.980 "name": "BaseBdev4", 00:17:17.980 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:17.980 "is_configured": true, 00:17:17.980 "data_offset": 0, 00:17:17.980 "data_size": 65536 00:17:17.980 } 00:17:17.980 ] 00:17:17.980 }' 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.980 [2024-11-04 11:49:43.372827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.980 [2024-11-04 11:49:43.388605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.980 11:49:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:17.980 [2024-11-04 11:49:43.398250] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.916 11:49:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.916 11:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.176 "name": "raid_bdev1", 00:17:19.176 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:19.176 "strip_size_kb": 64, 00:17:19.176 "state": "online", 00:17:19.176 "raid_level": "raid5f", 00:17:19.176 "superblock": false, 00:17:19.176 "num_base_bdevs": 4, 00:17:19.176 "num_base_bdevs_discovered": 4, 00:17:19.176 "num_base_bdevs_operational": 4, 00:17:19.176 "process": { 00:17:19.176 "type": "rebuild", 00:17:19.176 "target": "spare", 00:17:19.176 "progress": { 00:17:19.176 "blocks": 19200, 00:17:19.176 "percent": 9 00:17:19.176 } 00:17:19.176 }, 00:17:19.176 "base_bdevs_list": [ 00:17:19.176 { 00:17:19.176 "name": "spare", 00:17:19.176 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev2", 00:17:19.176 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev3", 00:17:19.176 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev4", 00:17:19.176 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 } 00:17:19.176 ] 00:17:19.176 }' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=626 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.176 
"name": "raid_bdev1", 00:17:19.176 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:19.176 "strip_size_kb": 64, 00:17:19.176 "state": "online", 00:17:19.176 "raid_level": "raid5f", 00:17:19.176 "superblock": false, 00:17:19.176 "num_base_bdevs": 4, 00:17:19.176 "num_base_bdevs_discovered": 4, 00:17:19.176 "num_base_bdevs_operational": 4, 00:17:19.176 "process": { 00:17:19.176 "type": "rebuild", 00:17:19.176 "target": "spare", 00:17:19.176 "progress": { 00:17:19.176 "blocks": 21120, 00:17:19.176 "percent": 10 00:17:19.176 } 00:17:19.176 }, 00:17:19.176 "base_bdevs_list": [ 00:17:19.176 { 00:17:19.176 "name": "spare", 00:17:19.176 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev2", 00:17:19.176 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev3", 00:17:19.176 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev4", 00:17:19.176 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 65536 00:17:19.176 } 00:17:19.176 ] 00:17:19.176 }' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.176 11:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.176 11:49:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.567 "name": "raid_bdev1", 00:17:20.567 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:20.567 "strip_size_kb": 64, 00:17:20.567 "state": "online", 00:17:20.567 "raid_level": "raid5f", 00:17:20.567 "superblock": false, 00:17:20.567 "num_base_bdevs": 4, 00:17:20.567 "num_base_bdevs_discovered": 4, 00:17:20.567 "num_base_bdevs_operational": 4, 00:17:20.567 "process": { 00:17:20.567 "type": "rebuild", 00:17:20.567 "target": "spare", 00:17:20.567 "progress": { 00:17:20.567 "blocks": 42240, 00:17:20.567 "percent": 21 00:17:20.567 } 00:17:20.567 }, 00:17:20.567 "base_bdevs_list": [ 00:17:20.567 { 
00:17:20.567 "name": "spare", 00:17:20.567 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:20.567 "is_configured": true, 00:17:20.567 "data_offset": 0, 00:17:20.567 "data_size": 65536 00:17:20.567 }, 00:17:20.567 { 00:17:20.567 "name": "BaseBdev2", 00:17:20.567 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:20.567 "is_configured": true, 00:17:20.567 "data_offset": 0, 00:17:20.567 "data_size": 65536 00:17:20.567 }, 00:17:20.567 { 00:17:20.567 "name": "BaseBdev3", 00:17:20.567 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:20.567 "is_configured": true, 00:17:20.567 "data_offset": 0, 00:17:20.567 "data_size": 65536 00:17:20.567 }, 00:17:20.567 { 00:17:20.567 "name": "BaseBdev4", 00:17:20.567 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:20.567 "is_configured": true, 00:17:20.567 "data_offset": 0, 00:17:20.567 "data_size": 65536 00:17:20.567 } 00:17:20.567 ] 00:17:20.567 }' 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.567 11:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.527 "name": "raid_bdev1", 00:17:21.527 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:21.527 "strip_size_kb": 64, 00:17:21.527 "state": "online", 00:17:21.527 "raid_level": "raid5f", 00:17:21.527 "superblock": false, 00:17:21.527 "num_base_bdevs": 4, 00:17:21.527 "num_base_bdevs_discovered": 4, 00:17:21.527 "num_base_bdevs_operational": 4, 00:17:21.527 "process": { 00:17:21.527 "type": "rebuild", 00:17:21.527 "target": "spare", 00:17:21.527 "progress": { 00:17:21.527 "blocks": 65280, 00:17:21.527 "percent": 33 00:17:21.527 } 00:17:21.527 }, 00:17:21.527 "base_bdevs_list": [ 00:17:21.527 { 00:17:21.527 "name": "spare", 00:17:21.527 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:21.527 "is_configured": true, 00:17:21.527 "data_offset": 0, 00:17:21.527 "data_size": 65536 00:17:21.527 }, 00:17:21.527 { 00:17:21.527 "name": "BaseBdev2", 00:17:21.527 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:21.527 "is_configured": true, 00:17:21.527 "data_offset": 0, 00:17:21.527 "data_size": 65536 00:17:21.527 }, 00:17:21.527 { 00:17:21.527 "name": "BaseBdev3", 00:17:21.527 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:21.527 "is_configured": true, 00:17:21.527 "data_offset": 0, 00:17:21.527 
"data_size": 65536 00:17:21.527 }, 00:17:21.527 { 00:17:21.527 "name": "BaseBdev4", 00:17:21.527 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:21.527 "is_configured": true, 00:17:21.527 "data_offset": 0, 00:17:21.527 "data_size": 65536 00:17:21.527 } 00:17:21.527 ] 00:17:21.527 }' 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.527 11:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.463 11:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.780 11:49:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.780 11:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.780 "name": "raid_bdev1", 00:17:22.780 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:22.780 "strip_size_kb": 64, 00:17:22.780 "state": "online", 00:17:22.780 "raid_level": "raid5f", 00:17:22.780 "superblock": false, 00:17:22.780 "num_base_bdevs": 4, 00:17:22.780 "num_base_bdevs_discovered": 4, 00:17:22.780 "num_base_bdevs_operational": 4, 00:17:22.780 "process": { 00:17:22.780 "type": "rebuild", 00:17:22.780 "target": "spare", 00:17:22.780 "progress": { 00:17:22.780 "blocks": 86400, 00:17:22.780 "percent": 43 00:17:22.780 } 00:17:22.780 }, 00:17:22.780 "base_bdevs_list": [ 00:17:22.780 { 00:17:22.780 "name": "spare", 00:17:22.780 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:22.780 "is_configured": true, 00:17:22.780 "data_offset": 0, 00:17:22.780 "data_size": 65536 00:17:22.780 }, 00:17:22.780 { 00:17:22.780 "name": "BaseBdev2", 00:17:22.780 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:22.780 "is_configured": true, 00:17:22.780 "data_offset": 0, 00:17:22.780 "data_size": 65536 00:17:22.780 }, 00:17:22.780 { 00:17:22.780 "name": "BaseBdev3", 00:17:22.780 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:22.780 "is_configured": true, 00:17:22.780 "data_offset": 0, 00:17:22.780 "data_size": 65536 00:17:22.780 }, 00:17:22.780 { 00:17:22.780 "name": "BaseBdev4", 00:17:22.780 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:22.780 "is_configured": true, 00:17:22.780 "data_offset": 0, 00:17:22.780 "data_size": 65536 00:17:22.780 } 00:17:22.780 ] 00:17:22.780 }' 00:17:22.780 11:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.780 11:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.780 11:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:22.780 11:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.780 11:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.717 "name": "raid_bdev1", 00:17:23.717 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:23.717 "strip_size_kb": 64, 00:17:23.717 "state": "online", 00:17:23.717 "raid_level": "raid5f", 00:17:23.717 "superblock": false, 00:17:23.717 "num_base_bdevs": 4, 00:17:23.717 "num_base_bdevs_discovered": 4, 00:17:23.717 "num_base_bdevs_operational": 4, 00:17:23.717 "process": { 00:17:23.717 "type": "rebuild", 00:17:23.717 "target": "spare", 00:17:23.717 
"progress": { 00:17:23.717 "blocks": 107520, 00:17:23.717 "percent": 54 00:17:23.717 } 00:17:23.717 }, 00:17:23.717 "base_bdevs_list": [ 00:17:23.717 { 00:17:23.717 "name": "spare", 00:17:23.717 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:23.717 "is_configured": true, 00:17:23.717 "data_offset": 0, 00:17:23.717 "data_size": 65536 00:17:23.717 }, 00:17:23.717 { 00:17:23.717 "name": "BaseBdev2", 00:17:23.717 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:23.717 "is_configured": true, 00:17:23.717 "data_offset": 0, 00:17:23.717 "data_size": 65536 00:17:23.717 }, 00:17:23.717 { 00:17:23.717 "name": "BaseBdev3", 00:17:23.717 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:23.717 "is_configured": true, 00:17:23.717 "data_offset": 0, 00:17:23.717 "data_size": 65536 00:17:23.717 }, 00:17:23.717 { 00:17:23.717 "name": "BaseBdev4", 00:17:23.717 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:23.717 "is_configured": true, 00:17:23.717 "data_offset": 0, 00:17:23.717 "data_size": 65536 00:17:23.717 } 00:17:23.717 ] 00:17:23.717 }' 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.717 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.976 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.976 11:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.912 11:49:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.912 "name": "raid_bdev1", 00:17:24.912 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:24.912 "strip_size_kb": 64, 00:17:24.912 "state": "online", 00:17:24.912 "raid_level": "raid5f", 00:17:24.912 "superblock": false, 00:17:24.912 "num_base_bdevs": 4, 00:17:24.912 "num_base_bdevs_discovered": 4, 00:17:24.912 "num_base_bdevs_operational": 4, 00:17:24.912 "process": { 00:17:24.912 "type": "rebuild", 00:17:24.912 "target": "spare", 00:17:24.912 "progress": { 00:17:24.912 "blocks": 130560, 00:17:24.912 "percent": 66 00:17:24.912 } 00:17:24.912 }, 00:17:24.912 "base_bdevs_list": [ 00:17:24.912 { 00:17:24.912 "name": "spare", 00:17:24.912 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:24.912 "is_configured": true, 00:17:24.912 "data_offset": 0, 00:17:24.912 "data_size": 65536 00:17:24.912 }, 00:17:24.912 { 00:17:24.912 "name": "BaseBdev2", 00:17:24.912 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:24.912 "is_configured": true, 00:17:24.912 "data_offset": 0, 00:17:24.912 "data_size": 65536 00:17:24.912 }, 00:17:24.912 { 
00:17:24.912 "name": "BaseBdev3", 00:17:24.912 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:24.912 "is_configured": true, 00:17:24.912 "data_offset": 0, 00:17:24.912 "data_size": 65536 00:17:24.912 }, 00:17:24.912 { 00:17:24.912 "name": "BaseBdev4", 00:17:24.912 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:24.912 "is_configured": true, 00:17:24.912 "data_offset": 0, 00:17:24.912 "data_size": 65536 00:17:24.912 } 00:17:24.912 ] 00:17:24.912 }' 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.912 11:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.318 "name": "raid_bdev1", 00:17:26.318 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:26.318 "strip_size_kb": 64, 00:17:26.318 "state": "online", 00:17:26.318 "raid_level": "raid5f", 00:17:26.318 "superblock": false, 00:17:26.318 "num_base_bdevs": 4, 00:17:26.318 "num_base_bdevs_discovered": 4, 00:17:26.318 "num_base_bdevs_operational": 4, 00:17:26.318 "process": { 00:17:26.318 "type": "rebuild", 00:17:26.318 "target": "spare", 00:17:26.318 "progress": { 00:17:26.318 "blocks": 151680, 00:17:26.318 "percent": 77 00:17:26.318 } 00:17:26.318 }, 00:17:26.318 "base_bdevs_list": [ 00:17:26.318 { 00:17:26.318 "name": "spare", 00:17:26.318 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:26.318 "is_configured": true, 00:17:26.318 "data_offset": 0, 00:17:26.318 "data_size": 65536 00:17:26.318 }, 00:17:26.318 { 00:17:26.318 "name": "BaseBdev2", 00:17:26.318 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:26.318 "is_configured": true, 00:17:26.318 "data_offset": 0, 00:17:26.318 "data_size": 65536 00:17:26.318 }, 00:17:26.318 { 00:17:26.318 "name": "BaseBdev3", 00:17:26.318 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:26.318 "is_configured": true, 00:17:26.318 "data_offset": 0, 00:17:26.318 "data_size": 65536 00:17:26.318 }, 00:17:26.318 { 00:17:26.318 "name": "BaseBdev4", 00:17:26.318 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:26.318 "is_configured": true, 00:17:26.318 "data_offset": 0, 00:17:26.318 "data_size": 65536 00:17:26.318 } 00:17:26.318 ] 00:17:26.318 }' 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.318 11:49:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.318 11:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.252 "name": "raid_bdev1", 00:17:27.252 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:27.252 "strip_size_kb": 64, 00:17:27.252 "state": "online", 00:17:27.252 "raid_level": "raid5f", 00:17:27.252 "superblock": false, 00:17:27.252 "num_base_bdevs": 4, 00:17:27.252 
"num_base_bdevs_discovered": 4, 00:17:27.252 "num_base_bdevs_operational": 4, 00:17:27.252 "process": { 00:17:27.252 "type": "rebuild", 00:17:27.252 "target": "spare", 00:17:27.252 "progress": { 00:17:27.252 "blocks": 174720, 00:17:27.252 "percent": 88 00:17:27.252 } 00:17:27.252 }, 00:17:27.252 "base_bdevs_list": [ 00:17:27.252 { 00:17:27.252 "name": "spare", 00:17:27.252 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:27.252 "is_configured": true, 00:17:27.252 "data_offset": 0, 00:17:27.252 "data_size": 65536 00:17:27.252 }, 00:17:27.252 { 00:17:27.252 "name": "BaseBdev2", 00:17:27.252 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:27.252 "is_configured": true, 00:17:27.252 "data_offset": 0, 00:17:27.252 "data_size": 65536 00:17:27.252 }, 00:17:27.252 { 00:17:27.252 "name": "BaseBdev3", 00:17:27.252 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:27.252 "is_configured": true, 00:17:27.252 "data_offset": 0, 00:17:27.252 "data_size": 65536 00:17:27.252 }, 00:17:27.252 { 00:17:27.252 "name": "BaseBdev4", 00:17:27.252 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:27.252 "is_configured": true, 00:17:27.252 "data_offset": 0, 00:17:27.252 "data_size": 65536 00:17:27.252 } 00:17:27.252 ] 00:17:27.252 }' 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.252 11:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.627 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.627 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:28.627 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.627 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.628 [2024-11-04 11:49:53.762815] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:28.628 [2024-11-04 11:49:53.762937] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:28.628 [2024-11-04 11:49:53.763015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.628 "name": "raid_bdev1", 00:17:28.628 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:28.628 "strip_size_kb": 64, 00:17:28.628 "state": "online", 00:17:28.628 "raid_level": "raid5f", 00:17:28.628 "superblock": false, 00:17:28.628 "num_base_bdevs": 4, 00:17:28.628 "num_base_bdevs_discovered": 4, 00:17:28.628 "num_base_bdevs_operational": 4, 00:17:28.628 "process": { 00:17:28.628 "type": "rebuild", 00:17:28.628 "target": "spare", 00:17:28.628 "progress": { 00:17:28.628 "blocks": 195840, 00:17:28.628 
"percent": 99 00:17:28.628 } 00:17:28.628 }, 00:17:28.628 "base_bdevs_list": [ 00:17:28.628 { 00:17:28.628 "name": "spare", 00:17:28.628 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:28.628 "is_configured": true, 00:17:28.628 "data_offset": 0, 00:17:28.628 "data_size": 65536 00:17:28.628 }, 00:17:28.628 { 00:17:28.628 "name": "BaseBdev2", 00:17:28.628 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:28.628 "is_configured": true, 00:17:28.628 "data_offset": 0, 00:17:28.628 "data_size": 65536 00:17:28.628 }, 00:17:28.628 { 00:17:28.628 "name": "BaseBdev3", 00:17:28.628 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:28.628 "is_configured": true, 00:17:28.628 "data_offset": 0, 00:17:28.628 "data_size": 65536 00:17:28.628 }, 00:17:28.628 { 00:17:28.628 "name": "BaseBdev4", 00:17:28.628 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:28.628 "is_configured": true, 00:17:28.628 "data_offset": 0, 00:17:28.628 "data_size": 65536 00:17:28.628 } 00:17:28.628 ] 00:17:28.628 }' 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.628 11:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.565 "name": "raid_bdev1", 00:17:29.565 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:29.565 "strip_size_kb": 64, 00:17:29.565 "state": "online", 00:17:29.565 "raid_level": "raid5f", 00:17:29.565 "superblock": false, 00:17:29.565 "num_base_bdevs": 4, 00:17:29.565 "num_base_bdevs_discovered": 4, 00:17:29.565 "num_base_bdevs_operational": 4, 00:17:29.565 "base_bdevs_list": [ 00:17:29.565 { 00:17:29.565 "name": "spare", 00:17:29.565 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 }, 00:17:29.565 { 00:17:29.565 "name": "BaseBdev2", 00:17:29.565 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 }, 00:17:29.565 { 00:17:29.565 "name": "BaseBdev3", 00:17:29.565 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 }, 00:17:29.565 { 00:17:29.565 "name": "BaseBdev4", 00:17:29.565 
"uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 } 00:17:29.565 ] 00:17:29.565 }' 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:29.565 11:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.565 "name": "raid_bdev1", 00:17:29.565 "uuid": 
"b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:29.565 "strip_size_kb": 64, 00:17:29.565 "state": "online", 00:17:29.565 "raid_level": "raid5f", 00:17:29.565 "superblock": false, 00:17:29.565 "num_base_bdevs": 4, 00:17:29.565 "num_base_bdevs_discovered": 4, 00:17:29.565 "num_base_bdevs_operational": 4, 00:17:29.565 "base_bdevs_list": [ 00:17:29.565 { 00:17:29.565 "name": "spare", 00:17:29.565 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 }, 00:17:29.565 { 00:17:29.565 "name": "BaseBdev2", 00:17:29.565 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 }, 00:17:29.565 { 00:17:29.565 "name": "BaseBdev3", 00:17:29.565 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 }, 00:17:29.565 { 00:17:29.565 "name": "BaseBdev4", 00:17:29.565 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:29.565 "is_configured": true, 00:17:29.565 "data_offset": 0, 00:17:29.565 "data_size": 65536 00:17:29.565 } 00:17:29.565 ] 00:17:29.565 }' 00:17:29.565 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.825 11:49:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.825 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.825 "name": "raid_bdev1", 00:17:29.825 "uuid": "b3151f5f-5680-429f-8b82-84b9304b3d75", 00:17:29.825 "strip_size_kb": 64, 00:17:29.825 "state": "online", 00:17:29.825 "raid_level": "raid5f", 00:17:29.825 "superblock": false, 00:17:29.825 "num_base_bdevs": 4, 00:17:29.825 "num_base_bdevs_discovered": 4, 00:17:29.825 "num_base_bdevs_operational": 4, 00:17:29.825 "base_bdevs_list": [ 00:17:29.825 { 00:17:29.825 "name": "spare", 00:17:29.825 "uuid": "71cf45a3-cd52-5a53-8743-18492838d690", 00:17:29.825 "is_configured": 
true, 00:17:29.825 "data_offset": 0, 00:17:29.825 "data_size": 65536 00:17:29.825 }, 00:17:29.825 { 00:17:29.825 "name": "BaseBdev2", 00:17:29.825 "uuid": "d462b6eb-52a3-5e1d-8362-11222f20ef91", 00:17:29.825 "is_configured": true, 00:17:29.826 "data_offset": 0, 00:17:29.826 "data_size": 65536 00:17:29.826 }, 00:17:29.826 { 00:17:29.826 "name": "BaseBdev3", 00:17:29.826 "uuid": "3863d9c1-5d20-55db-a396-5d1fa180cb84", 00:17:29.826 "is_configured": true, 00:17:29.826 "data_offset": 0, 00:17:29.826 "data_size": 65536 00:17:29.826 }, 00:17:29.826 { 00:17:29.826 "name": "BaseBdev4", 00:17:29.826 "uuid": "6165e39e-516a-5bf6-b361-b19824e97b70", 00:17:29.826 "is_configured": true, 00:17:29.826 "data_offset": 0, 00:17:29.826 "data_size": 65536 00:17:29.826 } 00:17:29.826 ] 00:17:29.826 }' 00:17:29.826 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.826 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.086 [2024-11-04 11:49:55.552101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.086 [2024-11-04 11:49:55.552137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.086 [2024-11-04 11:49:55.552231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.086 [2024-11-04 11:49:55.552335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.086 [2024-11-04 11:49:55.552346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:30.086 11:49:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:30.086 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:30.346 /dev/nbd0 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.346 1+0 records in 00:17:30.346 1+0 records out 00:17:30.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609771 s, 6.7 MB/s 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:30.346 11:49:55 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # return 0 00:17:30.605 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.605 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:30.605 11:49:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:30.606 /dev/nbd1 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.606 1+0 records in 00:17:30.606 1+0 records out 00:17:30.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044355 s, 9.2 MB/s 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # size=4096 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:30.606 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.865 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:31.158 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.158 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.158 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.159 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.159 11:49:56 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.159 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.159 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:31.159 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.159 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.159 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:31.417 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:31.417 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:31.417 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:31.417 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.417 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.417 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:31.417 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84839 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84839 ']' 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84839 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 
-- # '[' Linux = Linux ']' 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84839 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84839' 00:17:31.418 killing process with pid 84839 00:17:31.418 Received shutdown signal, test time was about 60.000000 seconds 00:17:31.418 00:17:31.418 Latency(us) 00:17:31.418 [2024-11-04T11:49:56.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.418 [2024-11-04T11:49:56.940Z] =================================================================================================================== 00:17:31.418 [2024-11-04T11:49:56.940Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84839 00:17:31.418 [2024-11-04 11:49:56.785131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.418 11:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84839 00:17:31.985 [2024-11-04 11:49:57.275347] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:32.927 00:17:32.927 real 0m20.006s 00:17:32.927 user 0m23.917s 00:17:32.927 sys 0m2.144s 00:17:32.927 ************************************ 00:17:32.927 END TEST raid5f_rebuild_test 00:17:32.927 ************************************ 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.927 11:49:58 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:32.927 11:49:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:32.927 11:49:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:32.927 11:49:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.927 ************************************ 00:17:32.927 START TEST raid5f_rebuild_test_sb 00:17:32.927 ************************************ 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:32.927 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85361 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85361 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85361 ']' 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.186 11:49:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.186 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:33.186 Zero copy mechanism will not be used. 00:17:33.186 [2024-11-04 11:49:58.532272] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:17:33.186 [2024-11-04 11:49:58.532392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85361 ] 00:17:33.444 [2024-11-04 11:49:58.707894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.444 [2024-11-04 11:49:58.822502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.704 [2024-11-04 11:49:59.022731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.704 [2024-11-04 11:49:59.022790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.963 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 BaseBdev1_malloc 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 [2024-11-04 11:49:59.399713] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:33.964 [2024-11-04 11:49:59.399784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.964 [2024-11-04 11:49:59.399809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.964 [2024-11-04 11:49:59.399820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.964 [2024-11-04 11:49:59.402170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.964 [2024-11-04 11:49:59.402281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:33.964 BaseBdev1 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 BaseBdev2_malloc 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 [2024-11-04 11:49:59.457655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:33.964 [2024-11-04 11:49:59.457714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:33.964 [2024-11-04 11:49:59.457734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.964 [2024-11-04 11:49:59.457747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.964 [2024-11-04 11:49:59.459979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.964 [2024-11-04 11:49:59.460015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:33.964 BaseBdev2 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 BaseBdev3_malloc 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 [2024-11-04 11:49:59.523684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:34.233 [2024-11-04 11:49:59.523802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.233 [2024-11-04 11:49:59.523827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:34.233 [2024-11-04 
11:49:59.523838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.233 [2024-11-04 11:49:59.526006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.233 [2024-11-04 11:49:59.526040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:34.233 BaseBdev3 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 BaseBdev4_malloc 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 [2024-11-04 11:49:59.579028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:34.233 [2024-11-04 11:49:59.579085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.233 [2024-11-04 11:49:59.579121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:34.233 [2024-11-04 11:49:59.579131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.233 [2024-11-04 11:49:59.581358] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:34.233 [2024-11-04 11:49:59.581409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:34.233 BaseBdev4 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 spare_malloc 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 spare_delay 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.233 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 [2024-11-04 11:49:59.647968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.233 [2024-11-04 11:49:59.648031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.233 [2024-11-04 11:49:59.648057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:34.233 [2024-11-04 11:49:59.648083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.234 [2024-11-04 11:49:59.650226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.234 [2024-11-04 11:49:59.650267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.234 spare 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.234 [2024-11-04 11:49:59.659996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.234 [2024-11-04 11:49:59.661853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.234 [2024-11-04 11:49:59.661914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.234 [2024-11-04 11:49:59.661964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.234 [2024-11-04 11:49:59.662147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:34.234 [2024-11-04 11:49:59.662163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:34.234 [2024-11-04 11:49:59.662388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:34.234 [2024-11-04 11:49:59.669591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:34.234 [2024-11-04 11:49:59.669610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:34.234 [2024-11-04 11:49:59.669798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.234 11:49:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.234 "name": "raid_bdev1", 00:17:34.234 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:34.234 "strip_size_kb": 64, 00:17:34.234 "state": "online", 00:17:34.234 "raid_level": "raid5f", 00:17:34.234 "superblock": true, 00:17:34.234 "num_base_bdevs": 4, 00:17:34.234 "num_base_bdevs_discovered": 4, 00:17:34.234 "num_base_bdevs_operational": 4, 00:17:34.234 "base_bdevs_list": [ 00:17:34.234 { 00:17:34.234 "name": "BaseBdev1", 00:17:34.234 "uuid": "ad8d7639-bbbc-5f3e-970a-eeea5db2a132", 00:17:34.234 "is_configured": true, 00:17:34.234 "data_offset": 2048, 00:17:34.234 "data_size": 63488 00:17:34.234 }, 00:17:34.234 { 00:17:34.234 "name": "BaseBdev2", 00:17:34.234 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:34.234 "is_configured": true, 00:17:34.234 "data_offset": 2048, 00:17:34.234 "data_size": 63488 00:17:34.234 }, 00:17:34.234 { 00:17:34.234 "name": "BaseBdev3", 00:17:34.234 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:34.234 "is_configured": true, 00:17:34.234 "data_offset": 2048, 00:17:34.234 "data_size": 63488 00:17:34.234 }, 00:17:34.234 { 00:17:34.234 "name": "BaseBdev4", 00:17:34.234 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:34.234 "is_configured": true, 00:17:34.234 "data_offset": 2048, 00:17:34.234 "data_size": 63488 00:17:34.234 } 00:17:34.234 ] 00:17:34.234 }' 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.234 11:49:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:34.806 [2024-11-04 11:50:00.105837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:34.806 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:35.065 [2024-11-04 11:50:00.405173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:35.066 /dev/nbd0 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.066 1+0 records in 00:17:35.066 1+0 records out 00:17:35.066 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000377897 s, 10.8 MB/s 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:35.066 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:35.634 496+0 records in 00:17:35.634 496+0 records out 00:17:35.634 97517568 bytes (98 MB, 93 MiB) copied, 0.479962 s, 203 MB/s 00:17:35.634 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:35.634 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.634 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:35.634 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:35.634 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:17:35.634 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.634 11:50:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:35.893 [2024-11-04 11:50:01.186190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.893 [2024-11-04 11:50:01.202555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.893 "name": "raid_bdev1", 00:17:35.893 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:35.893 "strip_size_kb": 64, 00:17:35.893 "state": "online", 00:17:35.893 "raid_level": "raid5f", 00:17:35.893 "superblock": true, 00:17:35.893 "num_base_bdevs": 4, 00:17:35.893 "num_base_bdevs_discovered": 3, 00:17:35.893 "num_base_bdevs_operational": 3, 00:17:35.893 "base_bdevs_list": [ 00:17:35.893 { 00:17:35.893 "name": null, 
00:17:35.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.893 "is_configured": false, 00:17:35.893 "data_offset": 0, 00:17:35.893 "data_size": 63488 00:17:35.893 }, 00:17:35.893 { 00:17:35.893 "name": "BaseBdev2", 00:17:35.893 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:35.893 "is_configured": true, 00:17:35.893 "data_offset": 2048, 00:17:35.893 "data_size": 63488 00:17:35.893 }, 00:17:35.893 { 00:17:35.893 "name": "BaseBdev3", 00:17:35.893 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:35.893 "is_configured": true, 00:17:35.893 "data_offset": 2048, 00:17:35.893 "data_size": 63488 00:17:35.893 }, 00:17:35.893 { 00:17:35.893 "name": "BaseBdev4", 00:17:35.893 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:35.893 "is_configured": true, 00:17:35.893 "data_offset": 2048, 00:17:35.893 "data_size": 63488 00:17:35.893 } 00:17:35.893 ] 00:17:35.893 }' 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.893 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.152 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.152 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.152 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.152 [2024-11-04 11:50:01.665818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.411 [2024-11-04 11:50:01.684646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:36.411 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.411 11:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:36.411 [2024-11-04 11:50:01.696483] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.349 "name": "raid_bdev1", 00:17:37.349 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:37.349 "strip_size_kb": 64, 00:17:37.349 "state": "online", 00:17:37.349 "raid_level": "raid5f", 00:17:37.349 "superblock": true, 00:17:37.349 "num_base_bdevs": 4, 00:17:37.349 "num_base_bdevs_discovered": 4, 00:17:37.349 "num_base_bdevs_operational": 4, 00:17:37.349 "process": { 00:17:37.349 "type": "rebuild", 00:17:37.349 "target": "spare", 00:17:37.349 "progress": { 00:17:37.349 "blocks": 17280, 00:17:37.349 "percent": 9 00:17:37.349 } 00:17:37.349 }, 00:17:37.349 "base_bdevs_list": [ 00:17:37.349 { 00:17:37.349 "name": "spare", 00:17:37.349 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:37.349 "is_configured": true, 
00:17:37.349 "data_offset": 2048, 00:17:37.349 "data_size": 63488 00:17:37.349 }, 00:17:37.349 { 00:17:37.349 "name": "BaseBdev2", 00:17:37.349 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:37.349 "is_configured": true, 00:17:37.349 "data_offset": 2048, 00:17:37.349 "data_size": 63488 00:17:37.349 }, 00:17:37.349 { 00:17:37.349 "name": "BaseBdev3", 00:17:37.349 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:37.349 "is_configured": true, 00:17:37.349 "data_offset": 2048, 00:17:37.349 "data_size": 63488 00:17:37.349 }, 00:17:37.349 { 00:17:37.349 "name": "BaseBdev4", 00:17:37.349 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:37.349 "is_configured": true, 00:17:37.349 "data_offset": 2048, 00:17:37.349 "data_size": 63488 00:17:37.349 } 00:17:37.349 ] 00:17:37.349 }' 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.349 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.349 [2024-11-04 11:50:02.851646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.609 [2024-11-04 11:50:02.905737] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.609 [2024-11-04 11:50:02.905814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.609 [2024-11-04 
11:50:02.905832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.609 [2024-11-04 11:50:02.905841] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.609 "name": "raid_bdev1", 00:17:37.609 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:37.609 "strip_size_kb": 64, 00:17:37.609 "state": "online", 00:17:37.609 "raid_level": "raid5f", 00:17:37.609 "superblock": true, 00:17:37.609 "num_base_bdevs": 4, 00:17:37.609 "num_base_bdevs_discovered": 3, 00:17:37.609 "num_base_bdevs_operational": 3, 00:17:37.609 "base_bdevs_list": [ 00:17:37.609 { 00:17:37.609 "name": null, 00:17:37.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.609 "is_configured": false, 00:17:37.609 "data_offset": 0, 00:17:37.609 "data_size": 63488 00:17:37.609 }, 00:17:37.609 { 00:17:37.609 "name": "BaseBdev2", 00:17:37.609 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:37.609 "is_configured": true, 00:17:37.609 "data_offset": 2048, 00:17:37.609 "data_size": 63488 00:17:37.609 }, 00:17:37.609 { 00:17:37.609 "name": "BaseBdev3", 00:17:37.609 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:37.609 "is_configured": true, 00:17:37.609 "data_offset": 2048, 00:17:37.609 "data_size": 63488 00:17:37.609 }, 00:17:37.609 { 00:17:37.609 "name": "BaseBdev4", 00:17:37.609 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:37.609 "is_configured": true, 00:17:37.609 "data_offset": 2048, 00:17:37.609 "data_size": 63488 00:17:37.609 } 00:17:37.609 ] 00:17:37.609 }' 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.609 11:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.869 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.128 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.128 "name": "raid_bdev1", 00:17:38.128 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:38.128 "strip_size_kb": 64, 00:17:38.129 "state": "online", 00:17:38.129 "raid_level": "raid5f", 00:17:38.129 "superblock": true, 00:17:38.129 "num_base_bdevs": 4, 00:17:38.129 "num_base_bdevs_discovered": 3, 00:17:38.129 "num_base_bdevs_operational": 3, 00:17:38.129 "base_bdevs_list": [ 00:17:38.129 { 00:17:38.129 "name": null, 00:17:38.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.129 "is_configured": false, 00:17:38.129 "data_offset": 0, 00:17:38.129 "data_size": 63488 00:17:38.129 }, 00:17:38.129 { 00:17:38.129 "name": "BaseBdev2", 00:17:38.129 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:38.129 "is_configured": true, 00:17:38.129 "data_offset": 2048, 00:17:38.129 "data_size": 63488 00:17:38.129 }, 00:17:38.129 { 00:17:38.129 "name": "BaseBdev3", 00:17:38.129 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:38.129 "is_configured": true, 00:17:38.129 "data_offset": 2048, 00:17:38.129 "data_size": 63488 00:17:38.129 }, 
00:17:38.129 { 00:17:38.129 "name": "BaseBdev4", 00:17:38.129 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:38.129 "is_configured": true, 00:17:38.129 "data_offset": 2048, 00:17:38.129 "data_size": 63488 00:17:38.129 } 00:17:38.129 ] 00:17:38.129 }' 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.129 [2024-11-04 11:50:03.518331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.129 [2024-11-04 11:50:03.534282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.129 11:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.129 [2024-11-04 11:50:03.544596] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.063 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.323 "name": "raid_bdev1", 00:17:39.323 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:39.323 "strip_size_kb": 64, 00:17:39.323 "state": "online", 00:17:39.323 "raid_level": "raid5f", 00:17:39.323 "superblock": true, 00:17:39.323 "num_base_bdevs": 4, 00:17:39.323 "num_base_bdevs_discovered": 4, 00:17:39.323 "num_base_bdevs_operational": 4, 00:17:39.323 "process": { 00:17:39.323 "type": "rebuild", 00:17:39.323 "target": "spare", 00:17:39.323 "progress": { 00:17:39.323 "blocks": 19200, 00:17:39.323 "percent": 10 00:17:39.323 } 00:17:39.323 }, 00:17:39.323 "base_bdevs_list": [ 00:17:39.323 { 00:17:39.323 "name": "spare", 00:17:39.323 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev2", 00:17:39.323 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev3", 00:17:39.323 "uuid": 
"d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev4", 00:17:39.323 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 } 00:17:39.323 ] 00:17:39.323 }' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:39.323 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=646 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.323 "name": "raid_bdev1", 00:17:39.323 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:39.323 "strip_size_kb": 64, 00:17:39.323 "state": "online", 00:17:39.323 "raid_level": "raid5f", 00:17:39.323 "superblock": true, 00:17:39.323 "num_base_bdevs": 4, 00:17:39.323 "num_base_bdevs_discovered": 4, 00:17:39.323 "num_base_bdevs_operational": 4, 00:17:39.323 "process": { 00:17:39.323 "type": "rebuild", 00:17:39.323 "target": "spare", 00:17:39.323 "progress": { 00:17:39.323 "blocks": 21120, 00:17:39.323 "percent": 11 00:17:39.323 } 00:17:39.323 }, 00:17:39.323 "base_bdevs_list": [ 00:17:39.323 { 00:17:39.323 "name": "spare", 00:17:39.323 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev2", 00:17:39.323 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev3", 00:17:39.323 "uuid": 
"d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 }, 00:17:39.323 { 00:17:39.323 "name": "BaseBdev4", 00:17:39.323 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:39.323 "is_configured": true, 00:17:39.323 "data_offset": 2048, 00:17:39.323 "data_size": 63488 00:17:39.323 } 00:17:39.323 ] 00:17:39.323 }' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.323 11:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.700 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.701 "name": "raid_bdev1", 00:17:40.701 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:40.701 "strip_size_kb": 64, 00:17:40.701 "state": "online", 00:17:40.701 "raid_level": "raid5f", 00:17:40.701 "superblock": true, 00:17:40.701 "num_base_bdevs": 4, 00:17:40.701 "num_base_bdevs_discovered": 4, 00:17:40.701 "num_base_bdevs_operational": 4, 00:17:40.701 "process": { 00:17:40.701 "type": "rebuild", 00:17:40.701 "target": "spare", 00:17:40.701 "progress": { 00:17:40.701 "blocks": 42240, 00:17:40.701 "percent": 22 00:17:40.701 } 00:17:40.701 }, 00:17:40.701 "base_bdevs_list": [ 00:17:40.701 { 00:17:40.701 "name": "spare", 00:17:40.701 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:40.701 "is_configured": true, 00:17:40.701 "data_offset": 2048, 00:17:40.701 "data_size": 63488 00:17:40.701 }, 00:17:40.701 { 00:17:40.701 "name": "BaseBdev2", 00:17:40.701 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:40.701 "is_configured": true, 00:17:40.701 "data_offset": 2048, 00:17:40.701 "data_size": 63488 00:17:40.701 }, 00:17:40.701 { 00:17:40.701 "name": "BaseBdev3", 00:17:40.701 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:40.701 "is_configured": true, 00:17:40.701 "data_offset": 2048, 00:17:40.701 "data_size": 63488 00:17:40.701 }, 00:17:40.701 { 00:17:40.701 "name": "BaseBdev4", 00:17:40.701 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:40.701 "is_configured": true, 00:17:40.701 "data_offset": 2048, 00:17:40.701 "data_size": 63488 00:17:40.701 } 00:17:40.701 ] 00:17:40.701 }' 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.701 11:50:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.701 11:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.643 11:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.643 11:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.643 11:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.643 "name": "raid_bdev1", 00:17:41.643 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:41.643 "strip_size_kb": 64, 00:17:41.643 "state": "online", 00:17:41.643 "raid_level": "raid5f", 00:17:41.643 "superblock": true, 
00:17:41.643 "num_base_bdevs": 4, 00:17:41.643 "num_base_bdevs_discovered": 4, 00:17:41.643 "num_base_bdevs_operational": 4, 00:17:41.643 "process": { 00:17:41.643 "type": "rebuild", 00:17:41.643 "target": "spare", 00:17:41.643 "progress": { 00:17:41.643 "blocks": 65280, 00:17:41.643 "percent": 34 00:17:41.643 } 00:17:41.643 }, 00:17:41.643 "base_bdevs_list": [ 00:17:41.643 { 00:17:41.643 "name": "spare", 00:17:41.643 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:41.643 "is_configured": true, 00:17:41.643 "data_offset": 2048, 00:17:41.643 "data_size": 63488 00:17:41.643 }, 00:17:41.643 { 00:17:41.643 "name": "BaseBdev2", 00:17:41.643 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:41.643 "is_configured": true, 00:17:41.643 "data_offset": 2048, 00:17:41.643 "data_size": 63488 00:17:41.643 }, 00:17:41.643 { 00:17:41.643 "name": "BaseBdev3", 00:17:41.643 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:41.643 "is_configured": true, 00:17:41.643 "data_offset": 2048, 00:17:41.643 "data_size": 63488 00:17:41.643 }, 00:17:41.643 { 00:17:41.643 "name": "BaseBdev4", 00:17:41.643 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:41.643 "is_configured": true, 00:17:41.643 "data_offset": 2048, 00:17:41.643 "data_size": 63488 00:17:41.643 } 00:17:41.643 ] 00:17:41.643 }' 00:17:41.643 11:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.643 11:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.643 11:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.643 11:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.643 11:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.018 11:50:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.018 "name": "raid_bdev1", 00:17:43.018 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:43.018 "strip_size_kb": 64, 00:17:43.018 "state": "online", 00:17:43.018 "raid_level": "raid5f", 00:17:43.018 "superblock": true, 00:17:43.018 "num_base_bdevs": 4, 00:17:43.018 "num_base_bdevs_discovered": 4, 00:17:43.018 "num_base_bdevs_operational": 4, 00:17:43.018 "process": { 00:17:43.018 "type": "rebuild", 00:17:43.018 "target": "spare", 00:17:43.018 "progress": { 00:17:43.018 "blocks": 86400, 00:17:43.018 "percent": 45 00:17:43.018 } 00:17:43.018 }, 00:17:43.018 "base_bdevs_list": [ 00:17:43.018 { 00:17:43.018 "name": "spare", 00:17:43.018 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:43.018 "is_configured": true, 00:17:43.018 "data_offset": 2048, 00:17:43.018 
"data_size": 63488 00:17:43.018 }, 00:17:43.018 { 00:17:43.018 "name": "BaseBdev2", 00:17:43.018 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:43.018 "is_configured": true, 00:17:43.018 "data_offset": 2048, 00:17:43.018 "data_size": 63488 00:17:43.018 }, 00:17:43.018 { 00:17:43.018 "name": "BaseBdev3", 00:17:43.018 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:43.018 "is_configured": true, 00:17:43.018 "data_offset": 2048, 00:17:43.018 "data_size": 63488 00:17:43.018 }, 00:17:43.018 { 00:17:43.018 "name": "BaseBdev4", 00:17:43.018 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:43.018 "is_configured": true, 00:17:43.018 "data_offset": 2048, 00:17:43.018 "data_size": 63488 00:17:43.018 } 00:17:43.018 ] 00:17:43.018 }' 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.018 11:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.954 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.954 "name": "raid_bdev1", 00:17:43.954 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:43.954 "strip_size_kb": 64, 00:17:43.954 "state": "online", 00:17:43.954 "raid_level": "raid5f", 00:17:43.954 "superblock": true, 00:17:43.954 "num_base_bdevs": 4, 00:17:43.954 "num_base_bdevs_discovered": 4, 00:17:43.954 "num_base_bdevs_operational": 4, 00:17:43.954 "process": { 00:17:43.955 "type": "rebuild", 00:17:43.955 "target": "spare", 00:17:43.955 "progress": { 00:17:43.955 "blocks": 109440, 00:17:43.955 "percent": 57 00:17:43.955 } 00:17:43.955 }, 00:17:43.955 "base_bdevs_list": [ 00:17:43.955 { 00:17:43.955 "name": "spare", 00:17:43.955 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:43.955 "is_configured": true, 00:17:43.955 "data_offset": 2048, 00:17:43.955 "data_size": 63488 00:17:43.955 }, 00:17:43.955 { 00:17:43.955 "name": "BaseBdev2", 00:17:43.955 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:43.955 "is_configured": true, 00:17:43.955 "data_offset": 2048, 00:17:43.955 "data_size": 63488 00:17:43.955 }, 00:17:43.955 { 00:17:43.955 "name": "BaseBdev3", 00:17:43.955 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:43.955 "is_configured": true, 00:17:43.955 "data_offset": 2048, 00:17:43.955 "data_size": 63488 00:17:43.955 }, 00:17:43.955 { 00:17:43.955 "name": "BaseBdev4", 
00:17:43.955 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:43.955 "is_configured": true, 00:17:43.955 "data_offset": 2048, 00:17:43.955 "data_size": 63488 00:17:43.955 } 00:17:43.955 ] 00:17:43.955 }' 00:17:43.955 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.955 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.955 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.955 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.955 11:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.340 "name": "raid_bdev1", 00:17:45.340 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:45.340 "strip_size_kb": 64, 00:17:45.340 "state": "online", 00:17:45.340 "raid_level": "raid5f", 00:17:45.340 "superblock": true, 00:17:45.340 "num_base_bdevs": 4, 00:17:45.340 "num_base_bdevs_discovered": 4, 00:17:45.340 "num_base_bdevs_operational": 4, 00:17:45.340 "process": { 00:17:45.340 "type": "rebuild", 00:17:45.340 "target": "spare", 00:17:45.340 "progress": { 00:17:45.340 "blocks": 130560, 00:17:45.340 "percent": 68 00:17:45.340 } 00:17:45.340 }, 00:17:45.340 "base_bdevs_list": [ 00:17:45.340 { 00:17:45.340 "name": "spare", 00:17:45.340 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:45.340 "is_configured": true, 00:17:45.340 "data_offset": 2048, 00:17:45.340 "data_size": 63488 00:17:45.340 }, 00:17:45.340 { 00:17:45.340 "name": "BaseBdev2", 00:17:45.340 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:45.340 "is_configured": true, 00:17:45.340 "data_offset": 2048, 00:17:45.340 "data_size": 63488 00:17:45.340 }, 00:17:45.340 { 00:17:45.340 "name": "BaseBdev3", 00:17:45.340 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:45.340 "is_configured": true, 00:17:45.340 "data_offset": 2048, 00:17:45.340 "data_size": 63488 00:17:45.340 }, 00:17:45.340 { 00:17:45.340 "name": "BaseBdev4", 00:17:45.340 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:45.340 "is_configured": true, 00:17:45.340 "data_offset": 2048, 00:17:45.340 "data_size": 63488 00:17:45.340 } 00:17:45.340 ] 00:17:45.340 }' 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.340 11:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.300 "name": "raid_bdev1", 00:17:46.300 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:46.300 "strip_size_kb": 64, 00:17:46.300 "state": "online", 00:17:46.300 "raid_level": "raid5f", 00:17:46.300 "superblock": true, 00:17:46.300 "num_base_bdevs": 4, 00:17:46.300 "num_base_bdevs_discovered": 4, 00:17:46.300 "num_base_bdevs_operational": 4, 00:17:46.300 "process": { 00:17:46.300 "type": "rebuild", 00:17:46.300 "target": "spare", 
00:17:46.300 "progress": { 00:17:46.300 "blocks": 153600, 00:17:46.300 "percent": 80 00:17:46.300 } 00:17:46.300 }, 00:17:46.300 "base_bdevs_list": [ 00:17:46.300 { 00:17:46.300 "name": "spare", 00:17:46.300 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:46.300 "is_configured": true, 00:17:46.300 "data_offset": 2048, 00:17:46.300 "data_size": 63488 00:17:46.300 }, 00:17:46.300 { 00:17:46.300 "name": "BaseBdev2", 00:17:46.300 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:46.300 "is_configured": true, 00:17:46.300 "data_offset": 2048, 00:17:46.300 "data_size": 63488 00:17:46.300 }, 00:17:46.300 { 00:17:46.300 "name": "BaseBdev3", 00:17:46.300 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:46.300 "is_configured": true, 00:17:46.300 "data_offset": 2048, 00:17:46.300 "data_size": 63488 00:17:46.300 }, 00:17:46.300 { 00:17:46.300 "name": "BaseBdev4", 00:17:46.300 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:46.300 "is_configured": true, 00:17:46.300 "data_offset": 2048, 00:17:46.300 "data_size": 63488 00:17:46.300 } 00:17:46.300 ] 00:17:46.300 }' 00:17:46.300 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.301 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.301 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.301 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.301 11:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.235 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.494 "name": "raid_bdev1", 00:17:47.494 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:47.494 "strip_size_kb": 64, 00:17:47.494 "state": "online", 00:17:47.494 "raid_level": "raid5f", 00:17:47.494 "superblock": true, 00:17:47.494 "num_base_bdevs": 4, 00:17:47.494 "num_base_bdevs_discovered": 4, 00:17:47.494 "num_base_bdevs_operational": 4, 00:17:47.494 "process": { 00:17:47.494 "type": "rebuild", 00:17:47.494 "target": "spare", 00:17:47.494 "progress": { 00:17:47.494 "blocks": 174720, 00:17:47.494 "percent": 91 00:17:47.494 } 00:17:47.494 }, 00:17:47.494 "base_bdevs_list": [ 00:17:47.494 { 00:17:47.494 "name": "spare", 00:17:47.494 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:47.494 "is_configured": true, 00:17:47.494 "data_offset": 2048, 00:17:47.494 "data_size": 63488 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "name": "BaseBdev2", 00:17:47.494 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:47.494 "is_configured": true, 00:17:47.494 
"data_offset": 2048, 00:17:47.494 "data_size": 63488 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "name": "BaseBdev3", 00:17:47.494 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:47.494 "is_configured": true, 00:17:47.494 "data_offset": 2048, 00:17:47.494 "data_size": 63488 00:17:47.494 }, 00:17:47.494 { 00:17:47.494 "name": "BaseBdev4", 00:17:47.494 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:47.494 "is_configured": true, 00:17:47.494 "data_offset": 2048, 00:17:47.494 "data_size": 63488 00:17:47.494 } 00:17:47.494 ] 00:17:47.494 }' 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.494 11:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.431 [2024-11-04 11:50:13.612886] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:48.431 [2024-11-04 11:50:13.613055] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:48.431 [2024-11-04 11:50:13.613257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.431 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.431 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.431 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.431 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.431 11:50:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.431 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.432 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.432 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.432 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.432 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.432 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.690 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.690 "name": "raid_bdev1", 00:17:48.690 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:48.690 "strip_size_kb": 64, 00:17:48.690 "state": "online", 00:17:48.690 "raid_level": "raid5f", 00:17:48.690 "superblock": true, 00:17:48.690 "num_base_bdevs": 4, 00:17:48.690 "num_base_bdevs_discovered": 4, 00:17:48.690 "num_base_bdevs_operational": 4, 00:17:48.690 "base_bdevs_list": [ 00:17:48.690 { 00:17:48.690 "name": "spare", 00:17:48.690 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:48.690 "is_configured": true, 00:17:48.690 "data_offset": 2048, 00:17:48.690 "data_size": 63488 00:17:48.690 }, 00:17:48.690 { 00:17:48.690 "name": "BaseBdev2", 00:17:48.690 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:48.690 "is_configured": true, 00:17:48.690 "data_offset": 2048, 00:17:48.690 "data_size": 63488 00:17:48.690 }, 00:17:48.690 { 00:17:48.690 "name": "BaseBdev3", 00:17:48.690 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:48.690 "is_configured": true, 00:17:48.690 "data_offset": 2048, 00:17:48.690 "data_size": 63488 00:17:48.690 }, 00:17:48.690 { 00:17:48.690 "name": "BaseBdev4", 00:17:48.690 "uuid": 
"a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:48.690 "is_configured": true, 00:17:48.690 "data_offset": 2048, 00:17:48.690 "data_size": 63488 00:17:48.690 } 00:17:48.690 ] 00:17:48.690 }' 00:17:48.690 11:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.690 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:48.690 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.690 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.691 "name": 
"raid_bdev1", 00:17:48.691 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:48.691 "strip_size_kb": 64, 00:17:48.691 "state": "online", 00:17:48.691 "raid_level": "raid5f", 00:17:48.691 "superblock": true, 00:17:48.691 "num_base_bdevs": 4, 00:17:48.691 "num_base_bdevs_discovered": 4, 00:17:48.691 "num_base_bdevs_operational": 4, 00:17:48.691 "base_bdevs_list": [ 00:17:48.691 { 00:17:48.691 "name": "spare", 00:17:48.691 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:48.691 "is_configured": true, 00:17:48.691 "data_offset": 2048, 00:17:48.691 "data_size": 63488 00:17:48.691 }, 00:17:48.691 { 00:17:48.691 "name": "BaseBdev2", 00:17:48.691 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:48.691 "is_configured": true, 00:17:48.691 "data_offset": 2048, 00:17:48.691 "data_size": 63488 00:17:48.691 }, 00:17:48.691 { 00:17:48.691 "name": "BaseBdev3", 00:17:48.691 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:48.691 "is_configured": true, 00:17:48.691 "data_offset": 2048, 00:17:48.691 "data_size": 63488 00:17:48.691 }, 00:17:48.691 { 00:17:48.691 "name": "BaseBdev4", 00:17:48.691 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:48.691 "is_configured": true, 00:17:48.691 "data_offset": 2048, 00:17:48.691 "data_size": 63488 00:17:48.691 } 00:17:48.691 ] 00:17:48.691 }' 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.691 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.949 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.949 "name": "raid_bdev1", 00:17:48.950 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:48.950 "strip_size_kb": 64, 00:17:48.950 "state": "online", 00:17:48.950 "raid_level": "raid5f", 00:17:48.950 "superblock": true, 00:17:48.950 "num_base_bdevs": 4, 00:17:48.950 "num_base_bdevs_discovered": 4, 00:17:48.950 "num_base_bdevs_operational": 4, 00:17:48.950 "base_bdevs_list": [ 00:17:48.950 { 00:17:48.950 "name": "spare", 
00:17:48.950 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:48.950 "is_configured": true, 00:17:48.950 "data_offset": 2048, 00:17:48.950 "data_size": 63488 00:17:48.950 }, 00:17:48.950 { 00:17:48.950 "name": "BaseBdev2", 00:17:48.950 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:48.950 "is_configured": true, 00:17:48.950 "data_offset": 2048, 00:17:48.950 "data_size": 63488 00:17:48.950 }, 00:17:48.950 { 00:17:48.950 "name": "BaseBdev3", 00:17:48.950 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:48.950 "is_configured": true, 00:17:48.950 "data_offset": 2048, 00:17:48.950 "data_size": 63488 00:17:48.950 }, 00:17:48.950 { 00:17:48.950 "name": "BaseBdev4", 00:17:48.950 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:48.950 "is_configured": true, 00:17:48.950 "data_offset": 2048, 00:17:48.950 "data_size": 63488 00:17:48.950 } 00:17:48.950 ] 00:17:48.950 }' 00:17:48.950 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.950 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.209 [2024-11-04 11:50:14.584095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.209 [2024-11-04 11:50:14.584195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.209 [2024-11-04 11:50:14.584340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.209 [2024-11-04 11:50:14.584504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.209 [2024-11-04 11:50:14.584587] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.209 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:49.467 /dev/nbd0 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.467 1+0 records in 00:17:49.467 1+0 records out 00:17:49.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413543 s, 9.9 MB/s 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.467 11:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:49.726 /dev/nbd1 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.726 1+0 records in 00:17:49.726 1+0 records out 00:17:49.726 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000396175 s, 10.3 MB/s 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.726 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:49.985 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:49.985 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.985 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.985 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.985 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:49.985 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.985 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:50.277 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:50.537 
11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.537 [2024-11-04 11:50:15.817119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:50.537 [2024-11-04 11:50:15.817187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.537 [2024-11-04 11:50:15.817213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:50.537 [2024-11-04 11:50:15.817224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.537 [2024-11-04 11:50:15.819671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.537 [2024-11-04 11:50:15.819711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:50.537 [2024-11-04 11:50:15.819815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:50.537 [2024-11-04 11:50:15.819878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.537 [2024-11-04 11:50:15.820018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.537 [2024-11-04 11:50:15.820128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.537 [2024-11-04 11:50:15.820218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:17:50.537 spare 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.537 [2024-11-04 11:50:15.920177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:50.537 [2024-11-04 11:50:15.920244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:50.537 [2024-11-04 11:50:15.920599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:50.537 [2024-11-04 11:50:15.927755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:50.537 [2024-11-04 11:50:15.927779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:50.537 [2024-11-04 11:50:15.927994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.537 "name": "raid_bdev1", 00:17:50.537 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:50.537 "strip_size_kb": 64, 00:17:50.537 "state": "online", 00:17:50.537 "raid_level": "raid5f", 00:17:50.537 "superblock": true, 00:17:50.537 "num_base_bdevs": 4, 00:17:50.537 "num_base_bdevs_discovered": 4, 00:17:50.537 "num_base_bdevs_operational": 4, 00:17:50.537 "base_bdevs_list": [ 00:17:50.537 { 00:17:50.537 "name": "spare", 00:17:50.537 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:50.537 "is_configured": true, 00:17:50.537 "data_offset": 2048, 00:17:50.537 "data_size": 63488 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "name": "BaseBdev2", 00:17:50.537 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:50.537 "is_configured": true, 00:17:50.537 "data_offset": 2048, 00:17:50.537 "data_size": 63488 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "name": 
"BaseBdev3", 00:17:50.537 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:50.537 "is_configured": true, 00:17:50.537 "data_offset": 2048, 00:17:50.537 "data_size": 63488 00:17:50.537 }, 00:17:50.537 { 00:17:50.537 "name": "BaseBdev4", 00:17:50.537 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:50.537 "is_configured": true, 00:17:50.537 "data_offset": 2048, 00:17:50.537 "data_size": 63488 00:17:50.537 } 00:17:50.537 ] 00:17:50.537 }' 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.537 11:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.102 "name": "raid_bdev1", 00:17:51.102 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:51.102 
"strip_size_kb": 64, 00:17:51.102 "state": "online", 00:17:51.102 "raid_level": "raid5f", 00:17:51.102 "superblock": true, 00:17:51.102 "num_base_bdevs": 4, 00:17:51.102 "num_base_bdevs_discovered": 4, 00:17:51.102 "num_base_bdevs_operational": 4, 00:17:51.102 "base_bdevs_list": [ 00:17:51.102 { 00:17:51.102 "name": "spare", 00:17:51.102 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:51.102 "is_configured": true, 00:17:51.102 "data_offset": 2048, 00:17:51.102 "data_size": 63488 00:17:51.102 }, 00:17:51.102 { 00:17:51.102 "name": "BaseBdev2", 00:17:51.102 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:51.102 "is_configured": true, 00:17:51.102 "data_offset": 2048, 00:17:51.102 "data_size": 63488 00:17:51.102 }, 00:17:51.102 { 00:17:51.102 "name": "BaseBdev3", 00:17:51.102 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:51.102 "is_configured": true, 00:17:51.102 "data_offset": 2048, 00:17:51.102 "data_size": 63488 00:17:51.102 }, 00:17:51.102 { 00:17:51.102 "name": "BaseBdev4", 00:17:51.102 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:51.102 "is_configured": true, 00:17:51.102 "data_offset": 2048, 00:17:51.102 "data_size": 63488 00:17:51.102 } 00:17:51.102 ] 00:17:51.102 }' 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.102 [2024-11-04 11:50:16.612173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.102 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.360 "name": "raid_bdev1", 00:17:51.360 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:51.360 "strip_size_kb": 64, 00:17:51.360 "state": "online", 00:17:51.360 "raid_level": "raid5f", 00:17:51.360 "superblock": true, 00:17:51.360 "num_base_bdevs": 4, 00:17:51.360 "num_base_bdevs_discovered": 3, 00:17:51.360 "num_base_bdevs_operational": 3, 00:17:51.360 "base_bdevs_list": [ 00:17:51.360 { 00:17:51.360 "name": null, 00:17:51.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.360 "is_configured": false, 00:17:51.360 "data_offset": 0, 00:17:51.360 "data_size": 63488 00:17:51.360 }, 00:17:51.360 { 00:17:51.360 "name": "BaseBdev2", 00:17:51.360 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:51.360 "is_configured": true, 00:17:51.360 "data_offset": 2048, 00:17:51.360 "data_size": 63488 00:17:51.360 }, 00:17:51.360 { 00:17:51.360 "name": "BaseBdev3", 00:17:51.360 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:51.360 "is_configured": true, 00:17:51.360 "data_offset": 2048, 00:17:51.360 "data_size": 63488 00:17:51.360 }, 00:17:51.360 { 00:17:51.360 "name": "BaseBdev4", 00:17:51.360 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:51.360 "is_configured": true, 00:17:51.360 "data_offset": 2048, 00:17:51.360 "data_size": 63488 00:17:51.360 } 00:17:51.360 ] 00:17:51.360 }' 
00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.360 11:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.618 11:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:51.618 11:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.618 11:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.618 [2024-11-04 11:50:17.055463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.618 [2024-11-04 11:50:17.055739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.618 [2024-11-04 11:50:17.055816] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:51.618 [2024-11-04 11:50:17.055906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.618 [2024-11-04 11:50:17.072586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:51.618 11:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.618 11:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:51.618 [2024-11-04 11:50:17.083583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.994 "name": "raid_bdev1", 00:17:52.994 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:52.994 "strip_size_kb": 64, 00:17:52.994 "state": "online", 00:17:52.994 "raid_level": "raid5f", 00:17:52.994 "superblock": true, 00:17:52.994 "num_base_bdevs": 4, 00:17:52.994 "num_base_bdevs_discovered": 4, 00:17:52.994 "num_base_bdevs_operational": 4, 00:17:52.994 "process": { 00:17:52.994 "type": "rebuild", 00:17:52.994 "target": "spare", 00:17:52.994 "progress": { 00:17:52.994 "blocks": 19200, 00:17:52.994 "percent": 10 00:17:52.994 } 00:17:52.994 }, 00:17:52.994 "base_bdevs_list": [ 00:17:52.994 { 00:17:52.994 "name": "spare", 00:17:52.994 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:52.994 "is_configured": true, 00:17:52.994 "data_offset": 2048, 00:17:52.994 "data_size": 63488 00:17:52.994 }, 00:17:52.994 { 00:17:52.994 "name": "BaseBdev2", 00:17:52.994 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:52.994 "is_configured": true, 00:17:52.994 "data_offset": 2048, 00:17:52.994 "data_size": 63488 00:17:52.994 }, 00:17:52.994 { 00:17:52.994 "name": "BaseBdev3", 00:17:52.994 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:52.994 
"is_configured": true, 00:17:52.994 "data_offset": 2048, 00:17:52.994 "data_size": 63488 00:17:52.994 }, 00:17:52.994 { 00:17:52.994 "name": "BaseBdev4", 00:17:52.994 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:52.994 "is_configured": true, 00:17:52.994 "data_offset": 2048, 00:17:52.994 "data_size": 63488 00:17:52.994 } 00:17:52.994 ] 00:17:52.994 }' 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.994 [2024-11-04 11:50:18.234463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.994 [2024-11-04 11:50:18.291942] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:52.994 [2024-11-04 11:50:18.292147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.994 [2024-11-04 11:50:18.292189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:52.994 [2024-11-04 11:50:18.292214] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.994 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.995 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.995 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.995 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.995 "name": "raid_bdev1", 00:17:52.995 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:52.995 "strip_size_kb": 64, 00:17:52.995 "state": "online", 00:17:52.995 "raid_level": "raid5f", 00:17:52.995 "superblock": true, 00:17:52.995 "num_base_bdevs": 4, 00:17:52.995 "num_base_bdevs_discovered": 3, 
00:17:52.995 "num_base_bdevs_operational": 3, 00:17:52.995 "base_bdevs_list": [ 00:17:52.995 { 00:17:52.995 "name": null, 00:17:52.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.995 "is_configured": false, 00:17:52.995 "data_offset": 0, 00:17:52.995 "data_size": 63488 00:17:52.995 }, 00:17:52.995 { 00:17:52.995 "name": "BaseBdev2", 00:17:52.995 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:52.995 "is_configured": true, 00:17:52.995 "data_offset": 2048, 00:17:52.995 "data_size": 63488 00:17:52.995 }, 00:17:52.995 { 00:17:52.995 "name": "BaseBdev3", 00:17:52.995 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:52.995 "is_configured": true, 00:17:52.995 "data_offset": 2048, 00:17:52.995 "data_size": 63488 00:17:52.995 }, 00:17:52.995 { 00:17:52.995 "name": "BaseBdev4", 00:17:52.995 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:52.995 "is_configured": true, 00:17:52.995 "data_offset": 2048, 00:17:52.995 "data_size": 63488 00:17:52.995 } 00:17:52.995 ] 00:17:52.995 }' 00:17:52.995 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.995 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.253 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.253 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.253 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.253 [2024-11-04 11:50:18.765284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.253 [2024-11-04 11:50:18.765427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.253 [2024-11-04 11:50:18.765523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:53.253 [2024-11-04 11:50:18.765568] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.253 [2024-11-04 11:50:18.766178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.253 [2024-11-04 11:50:18.766260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.253 [2024-11-04 11:50:18.766426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:53.253 [2024-11-04 11:50:18.766490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.253 [2024-11-04 11:50:18.766553] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:53.253 [2024-11-04 11:50:18.766624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.512 [2024-11-04 11:50:18.782027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:53.512 spare 00:17:53.512 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.512 11:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:53.512 [2024-11-04 11:50:18.791311] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.464 "name": "raid_bdev1", 00:17:54.464 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:54.464 "strip_size_kb": 64, 00:17:54.464 "state": "online", 00:17:54.464 "raid_level": "raid5f", 00:17:54.464 "superblock": true, 00:17:54.464 "num_base_bdevs": 4, 00:17:54.464 "num_base_bdevs_discovered": 4, 00:17:54.464 "num_base_bdevs_operational": 4, 00:17:54.464 "process": { 00:17:54.464 "type": "rebuild", 00:17:54.464 "target": "spare", 00:17:54.464 "progress": { 00:17:54.464 "blocks": 19200, 00:17:54.464 "percent": 10 00:17:54.464 } 00:17:54.464 }, 00:17:54.464 "base_bdevs_list": [ 00:17:54.464 { 00:17:54.464 "name": "spare", 00:17:54.464 "uuid": "9270e2fe-2eec-5119-97a2-d4c479a443fd", 00:17:54.464 "is_configured": true, 00:17:54.464 "data_offset": 2048, 00:17:54.464 "data_size": 63488 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "name": "BaseBdev2", 00:17:54.464 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:54.464 "is_configured": true, 00:17:54.464 "data_offset": 2048, 00:17:54.464 "data_size": 63488 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "name": "BaseBdev3", 00:17:54.464 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:54.464 "is_configured": true, 00:17:54.464 "data_offset": 2048, 00:17:54.464 "data_size": 63488 00:17:54.464 }, 00:17:54.464 { 00:17:54.464 "name": "BaseBdev4", 00:17:54.464 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 
00:17:54.464 "is_configured": true, 00:17:54.464 "data_offset": 2048, 00:17:54.464 "data_size": 63488 00:17:54.464 } 00:17:54.464 ] 00:17:54.464 }' 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.464 11:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.464 [2024-11-04 11:50:19.946427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.722 [2024-11-04 11:50:20.000194] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:54.722 [2024-11-04 11:50:20.000343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.722 [2024-11-04 11:50:20.000405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:54.722 [2024-11-04 11:50:20.000435] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.722 "name": "raid_bdev1", 00:17:54.722 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:54.722 "strip_size_kb": 64, 00:17:54.722 "state": "online", 00:17:54.722 "raid_level": "raid5f", 00:17:54.722 "superblock": true, 00:17:54.722 "num_base_bdevs": 4, 00:17:54.722 "num_base_bdevs_discovered": 3, 00:17:54.722 "num_base_bdevs_operational": 3, 00:17:54.722 "base_bdevs_list": [ 00:17:54.722 { 00:17:54.722 "name": null, 00:17:54.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.722 "is_configured": 
false, 00:17:54.722 "data_offset": 0, 00:17:54.722 "data_size": 63488 00:17:54.722 }, 00:17:54.722 { 00:17:54.722 "name": "BaseBdev2", 00:17:54.722 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:54.722 "is_configured": true, 00:17:54.722 "data_offset": 2048, 00:17:54.722 "data_size": 63488 00:17:54.722 }, 00:17:54.722 { 00:17:54.722 "name": "BaseBdev3", 00:17:54.722 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:54.722 "is_configured": true, 00:17:54.722 "data_offset": 2048, 00:17:54.722 "data_size": 63488 00:17:54.722 }, 00:17:54.722 { 00:17:54.722 "name": "BaseBdev4", 00:17:54.722 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:54.722 "is_configured": true, 00:17:54.722 "data_offset": 2048, 00:17:54.722 "data_size": 63488 00:17:54.722 } 00:17:54.722 ] 00:17:54.722 }' 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.722 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.980 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.241 "name": "raid_bdev1", 00:17:55.241 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:55.241 "strip_size_kb": 64, 00:17:55.241 "state": "online", 00:17:55.241 "raid_level": "raid5f", 00:17:55.241 "superblock": true, 00:17:55.241 "num_base_bdevs": 4, 00:17:55.241 "num_base_bdevs_discovered": 3, 00:17:55.241 "num_base_bdevs_operational": 3, 00:17:55.241 "base_bdevs_list": [ 00:17:55.241 { 00:17:55.241 "name": null, 00:17:55.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.241 "is_configured": false, 00:17:55.241 "data_offset": 0, 00:17:55.241 "data_size": 63488 00:17:55.241 }, 00:17:55.241 { 00:17:55.241 "name": "BaseBdev2", 00:17:55.241 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:55.241 "is_configured": true, 00:17:55.241 "data_offset": 2048, 00:17:55.241 "data_size": 63488 00:17:55.241 }, 00:17:55.241 { 00:17:55.241 "name": "BaseBdev3", 00:17:55.241 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:55.241 "is_configured": true, 00:17:55.241 "data_offset": 2048, 00:17:55.241 "data_size": 63488 00:17:55.241 }, 00:17:55.241 { 00:17:55.241 "name": "BaseBdev4", 00:17:55.241 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:55.241 "is_configured": true, 00:17:55.241 "data_offset": 2048, 00:17:55.241 "data_size": 63488 00:17:55.241 } 00:17:55.241 ] 00:17:55.241 }' 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.241 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.241 [2024-11-04 11:50:20.640132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:55.241 [2024-11-04 11:50:20.640263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.241 [2024-11-04 11:50:20.640295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:55.241 [2024-11-04 11:50:20.640336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.241 [2024-11-04 11:50:20.640903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.242 [2024-11-04 11:50:20.640927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:55.242 [2024-11-04 11:50:20.641027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:55.242 [2024-11-04 11:50:20.641044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.242 [2024-11-04 11:50:20.641059] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:17:55.242 [2024-11-04 11:50:20.641071] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:55.242 BaseBdev1 00:17:55.242 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.242 11:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.175 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.175 11:50:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.433 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.433 "name": "raid_bdev1", 00:17:56.433 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:56.433 "strip_size_kb": 64, 00:17:56.433 "state": "online", 00:17:56.433 "raid_level": "raid5f", 00:17:56.433 "superblock": true, 00:17:56.433 "num_base_bdevs": 4, 00:17:56.433 "num_base_bdevs_discovered": 3, 00:17:56.433 "num_base_bdevs_operational": 3, 00:17:56.433 "base_bdevs_list": [ 00:17:56.433 { 00:17:56.433 "name": null, 00:17:56.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.433 "is_configured": false, 00:17:56.433 "data_offset": 0, 00:17:56.433 "data_size": 63488 00:17:56.433 }, 00:17:56.433 { 00:17:56.433 "name": "BaseBdev2", 00:17:56.433 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:56.433 "is_configured": true, 00:17:56.433 "data_offset": 2048, 00:17:56.433 "data_size": 63488 00:17:56.433 }, 00:17:56.433 { 00:17:56.433 "name": "BaseBdev3", 00:17:56.433 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:56.433 "is_configured": true, 00:17:56.433 "data_offset": 2048, 00:17:56.433 "data_size": 63488 00:17:56.433 }, 00:17:56.433 { 00:17:56.433 "name": "BaseBdev4", 00:17:56.433 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:56.433 "is_configured": true, 00:17:56.433 "data_offset": 2048, 00:17:56.433 "data_size": 63488 00:17:56.433 } 00:17:56.433 ] 00:17:56.433 }' 00:17:56.433 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.433 11:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.691 11:50:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.691 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.691 "name": "raid_bdev1", 00:17:56.692 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:56.692 "strip_size_kb": 64, 00:17:56.692 "state": "online", 00:17:56.692 "raid_level": "raid5f", 00:17:56.692 "superblock": true, 00:17:56.692 "num_base_bdevs": 4, 00:17:56.692 "num_base_bdevs_discovered": 3, 00:17:56.692 "num_base_bdevs_operational": 3, 00:17:56.692 "base_bdevs_list": [ 00:17:56.692 { 00:17:56.692 "name": null, 00:17:56.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.692 "is_configured": false, 00:17:56.692 "data_offset": 0, 00:17:56.692 "data_size": 63488 00:17:56.692 }, 00:17:56.692 { 00:17:56.692 "name": "BaseBdev2", 00:17:56.692 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:56.692 "is_configured": true, 00:17:56.692 "data_offset": 2048, 00:17:56.692 "data_size": 63488 00:17:56.692 }, 00:17:56.692 { 00:17:56.692 "name": "BaseBdev3", 00:17:56.692 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:56.692 "is_configured": true, 00:17:56.692 "data_offset": 2048, 00:17:56.692 
"data_size": 63488 00:17:56.692 }, 00:17:56.692 { 00:17:56.692 "name": "BaseBdev4", 00:17:56.692 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:56.692 "is_configured": true, 00:17:56.692 "data_offset": 2048, 00:17:56.692 "data_size": 63488 00:17:56.692 } 00:17:56.692 ] 00:17:56.692 }' 00:17:56.692 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.692 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.692 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.950 [2024-11-04 
11:50:22.261448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.950 [2024-11-04 11:50:22.261691] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:56.950 [2024-11-04 11:50:22.261716] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:56.950 request: 00:17:56.950 { 00:17:56.950 "base_bdev": "BaseBdev1", 00:17:56.950 "raid_bdev": "raid_bdev1", 00:17:56.950 "method": "bdev_raid_add_base_bdev", 00:17:56.950 "req_id": 1 00:17:56.950 } 00:17:56.950 Got JSON-RPC error response 00:17:56.950 response: 00:17:56.950 { 00:17:56.950 "code": -22, 00:17:56.950 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:56.950 } 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:56.950 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.951 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.951 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.951 11:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:57.884 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:57.884 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.885 "name": "raid_bdev1", 00:17:57.885 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:57.885 "strip_size_kb": 64, 00:17:57.885 "state": "online", 00:17:57.885 "raid_level": "raid5f", 00:17:57.885 "superblock": true, 00:17:57.885 "num_base_bdevs": 4, 00:17:57.885 "num_base_bdevs_discovered": 3, 00:17:57.885 "num_base_bdevs_operational": 3, 00:17:57.885 "base_bdevs_list": [ 00:17:57.885 { 00:17:57.885 "name": null, 00:17:57.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.885 "is_configured": false, 00:17:57.885 "data_offset": 0, 00:17:57.885 "data_size": 63488 00:17:57.885 }, 00:17:57.885 { 00:17:57.885 "name": "BaseBdev2", 00:17:57.885 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:57.885 
"is_configured": true, 00:17:57.885 "data_offset": 2048, 00:17:57.885 "data_size": 63488 00:17:57.885 }, 00:17:57.885 { 00:17:57.885 "name": "BaseBdev3", 00:17:57.885 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:57.885 "is_configured": true, 00:17:57.885 "data_offset": 2048, 00:17:57.885 "data_size": 63488 00:17:57.885 }, 00:17:57.885 { 00:17:57.885 "name": "BaseBdev4", 00:17:57.885 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:57.885 "is_configured": true, 00:17:57.885 "data_offset": 2048, 00:17:57.885 "data_size": 63488 00:17:57.885 } 00:17:57.885 ] 00:17:57.885 }' 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.885 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:58.450 "name": "raid_bdev1", 00:17:58.450 "uuid": "d33afff0-9021-414b-92dd-f63d0c9224c2", 00:17:58.450 "strip_size_kb": 64, 00:17:58.450 "state": "online", 00:17:58.450 "raid_level": "raid5f", 00:17:58.450 "superblock": true, 00:17:58.450 "num_base_bdevs": 4, 00:17:58.450 "num_base_bdevs_discovered": 3, 00:17:58.450 "num_base_bdevs_operational": 3, 00:17:58.450 "base_bdevs_list": [ 00:17:58.450 { 00:17:58.450 "name": null, 00:17:58.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.450 "is_configured": false, 00:17:58.450 "data_offset": 0, 00:17:58.450 "data_size": 63488 00:17:58.450 }, 00:17:58.450 { 00:17:58.450 "name": "BaseBdev2", 00:17:58.450 "uuid": "32e61566-c7fa-53de-8082-073ec46bd7b7", 00:17:58.450 "is_configured": true, 00:17:58.450 "data_offset": 2048, 00:17:58.450 "data_size": 63488 00:17:58.450 }, 00:17:58.450 { 00:17:58.450 "name": "BaseBdev3", 00:17:58.450 "uuid": "d6d4cc16-a68a-5ba7-a289-32fc3856522f", 00:17:58.450 "is_configured": true, 00:17:58.450 "data_offset": 2048, 00:17:58.450 "data_size": 63488 00:17:58.450 }, 00:17:58.450 { 00:17:58.450 "name": "BaseBdev4", 00:17:58.450 "uuid": "a7083231-838e-55ed-a3a9-d2fc0d3f9dde", 00:17:58.450 "is_configured": true, 00:17:58.450 "data_offset": 2048, 00:17:58.450 "data_size": 63488 00:17:58.450 } 00:17:58.450 ] 00:17:58.450 }' 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85361 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 
85361 ']' 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85361 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85361 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85361' 00:17:58.450 killing process with pid 85361 00:17:58.450 Received shutdown signal, test time was about 60.000000 seconds 00:17:58.450 00:17:58.450 Latency(us) 00:17:58.450 [2024-11-04T11:50:23.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.450 [2024-11-04T11:50:23.972Z] =================================================================================================================== 00:17:58.450 [2024-11-04T11:50:23.972Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85361 00:17:58.450 [2024-11-04 11:50:23.900063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.450 [2024-11-04 11:50:23.900193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.450 11:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85361 00:17:58.450 [2024-11-04 11:50:23.900274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.450 [2024-11-04 11:50:23.900289] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:59.018 [2024-11-04 11:50:24.384583] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.953 11:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:59.953 00:17:59.953 real 0m27.024s 00:17:59.953 user 0m34.025s 00:17:59.953 sys 0m2.938s 00:17:59.953 ************************************ 00:17:59.953 END TEST raid5f_rebuild_test_sb 00:17:59.953 ************************************ 00:17:59.953 11:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:59.953 11:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.211 11:50:25 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:00.211 11:50:25 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:00.211 11:50:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:00.212 11:50:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:00.212 11:50:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.212 ************************************ 00:18:00.212 START TEST raid_state_function_test_sb_4k 00:18:00.212 ************************************ 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:00.212 11:50:25 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86166 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86166' 00:18:00.212 Process raid pid: 86166 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86166 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86166 ']' 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.212 11:50:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.212 [2024-11-04 11:50:25.626652] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:18:00.212 [2024-11-04 11:50:25.626852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.470 [2024-11-04 11:50:25.798922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.470 [2024-11-04 11:50:25.911145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.729 [2024-11-04 11:50:26.116495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.729 [2024-11-04 11:50:26.116580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.987 [2024-11-04 11:50:26.457814] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.987 [2024-11-04 11:50:26.457920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.987 [2024-11-04 11:50:26.457949] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.987 [2024-11-04 11:50:26.457972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.987 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.988 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.988 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.988 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.988 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.247 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.247 "name": "Existed_Raid", 00:18:01.247 "uuid": 
"b2837528-aa13-47eb-9c41-396c85b3c885", 00:18:01.247 "strip_size_kb": 0, 00:18:01.247 "state": "configuring", 00:18:01.247 "raid_level": "raid1", 00:18:01.247 "superblock": true, 00:18:01.247 "num_base_bdevs": 2, 00:18:01.247 "num_base_bdevs_discovered": 0, 00:18:01.247 "num_base_bdevs_operational": 2, 00:18:01.247 "base_bdevs_list": [ 00:18:01.247 { 00:18:01.247 "name": "BaseBdev1", 00:18:01.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.247 "is_configured": false, 00:18:01.247 "data_offset": 0, 00:18:01.247 "data_size": 0 00:18:01.247 }, 00:18:01.247 { 00:18:01.247 "name": "BaseBdev2", 00:18:01.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.247 "is_configured": false, 00:18:01.247 "data_offset": 0, 00:18:01.247 "data_size": 0 00:18:01.247 } 00:18:01.247 ] 00:18:01.247 }' 00:18:01.247 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.247 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.507 [2024-11-04 11:50:26.909003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.507 [2024-11-04 11:50:26.909084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:01.507 11:50:26 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.507 [2024-11-04 11:50:26.920981] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.507 [2024-11-04 11:50:26.921058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.507 [2024-11-04 11:50:26.921086] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.507 [2024-11-04 11:50:26.921110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.507 [2024-11-04 11:50:26.967474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.507 BaseBdev1 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.507 11:50:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.507 [ 00:18:01.507 { 00:18:01.507 "name": "BaseBdev1", 00:18:01.507 "aliases": [ 00:18:01.507 "5f8171ca-2e69-48f5-b9bd-baaa74b7f996" 00:18:01.507 ], 00:18:01.507 "product_name": "Malloc disk", 00:18:01.507 "block_size": 4096, 00:18:01.507 "num_blocks": 8192, 00:18:01.507 "uuid": "5f8171ca-2e69-48f5-b9bd-baaa74b7f996", 00:18:01.507 "assigned_rate_limits": { 00:18:01.507 "rw_ios_per_sec": 0, 00:18:01.507 "rw_mbytes_per_sec": 0, 00:18:01.507 "r_mbytes_per_sec": 0, 00:18:01.507 "w_mbytes_per_sec": 0 00:18:01.507 }, 00:18:01.507 "claimed": true, 00:18:01.507 "claim_type": "exclusive_write", 00:18:01.507 "zoned": false, 00:18:01.507 "supported_io_types": { 00:18:01.507 "read": true, 00:18:01.507 "write": true, 00:18:01.507 "unmap": true, 00:18:01.507 "flush": true, 00:18:01.507 "reset": true, 00:18:01.507 "nvme_admin": false, 00:18:01.507 "nvme_io": false, 00:18:01.507 "nvme_io_md": false, 00:18:01.507 "write_zeroes": true, 00:18:01.507 "zcopy": true, 00:18:01.507 
"get_zone_info": false, 00:18:01.507 "zone_management": false, 00:18:01.507 "zone_append": false, 00:18:01.507 "compare": false, 00:18:01.507 "compare_and_write": false, 00:18:01.507 "abort": true, 00:18:01.507 "seek_hole": false, 00:18:01.507 "seek_data": false, 00:18:01.507 "copy": true, 00:18:01.507 "nvme_iov_md": false 00:18:01.507 }, 00:18:01.507 "memory_domains": [ 00:18:01.507 { 00:18:01.507 "dma_device_id": "system", 00:18:01.507 "dma_device_type": 1 00:18:01.507 }, 00:18:01.507 { 00:18:01.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.507 "dma_device_type": 2 00:18:01.507 } 00:18:01.507 ], 00:18:01.507 "driver_specific": {} 00:18:01.507 } 00:18:01.507 ] 00:18:01.507 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.507 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:01.507 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:01.507 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.507 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.508 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.767 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.767 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.767 "name": "Existed_Raid", 00:18:01.767 "uuid": "d3e8e078-feba-49c3-8b03-3a40ad42b756", 00:18:01.767 "strip_size_kb": 0, 00:18:01.767 "state": "configuring", 00:18:01.767 "raid_level": "raid1", 00:18:01.767 "superblock": true, 00:18:01.767 "num_base_bdevs": 2, 00:18:01.767 "num_base_bdevs_discovered": 1, 00:18:01.767 "num_base_bdevs_operational": 2, 00:18:01.767 "base_bdevs_list": [ 00:18:01.767 { 00:18:01.767 "name": "BaseBdev1", 00:18:01.767 "uuid": "5f8171ca-2e69-48f5-b9bd-baaa74b7f996", 00:18:01.767 "is_configured": true, 00:18:01.767 "data_offset": 256, 00:18:01.767 "data_size": 7936 00:18:01.767 }, 00:18:01.767 { 00:18:01.767 "name": "BaseBdev2", 00:18:01.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.767 "is_configured": false, 00:18:01.767 "data_offset": 0, 00:18:01.767 "data_size": 0 00:18:01.767 } 00:18:01.767 ] 00:18:01.767 }' 00:18:01.767 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.767 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.026 [2024-11-04 11:50:27.450673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.026 [2024-11-04 11:50:27.450789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.026 [2024-11-04 11:50:27.462690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.026 [2024-11-04 11:50:27.464545] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.026 [2024-11-04 11:50:27.464586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:02.026 11:50:27 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.026 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.027 "name": "Existed_Raid", 00:18:02.027 "uuid": "6a9083e1-4725-4ef4-99f9-b7ad03f806e0", 00:18:02.027 "strip_size_kb": 0, 00:18:02.027 "state": "configuring", 00:18:02.027 "raid_level": "raid1", 00:18:02.027 "superblock": true, 
00:18:02.027 "num_base_bdevs": 2, 00:18:02.027 "num_base_bdevs_discovered": 1, 00:18:02.027 "num_base_bdevs_operational": 2, 00:18:02.027 "base_bdevs_list": [ 00:18:02.027 { 00:18:02.027 "name": "BaseBdev1", 00:18:02.027 "uuid": "5f8171ca-2e69-48f5-b9bd-baaa74b7f996", 00:18:02.027 "is_configured": true, 00:18:02.027 "data_offset": 256, 00:18:02.027 "data_size": 7936 00:18:02.027 }, 00:18:02.027 { 00:18:02.027 "name": "BaseBdev2", 00:18:02.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.027 "is_configured": false, 00:18:02.027 "data_offset": 0, 00:18:02.027 "data_size": 0 00:18:02.027 } 00:18:02.027 ] 00:18:02.027 }' 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.027 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.596 [2024-11-04 11:50:27.939531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.596 [2024-11-04 11:50:27.939854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:02.596 [2024-11-04 11:50:27.939905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:02.596 [2024-11-04 11:50:27.940212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:02.596 BaseBdev2 00:18:02.596 [2024-11-04 11:50:27.940428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:02.596 [2024-11-04 11:50:27.940445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:18:02.596 [2024-11-04 11:50:27.940599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.596 [ 00:18:02.596 { 00:18:02.596 "name": "BaseBdev2", 00:18:02.596 "aliases": [ 00:18:02.596 "04560e31-d37b-4bf7-8267-9b8a15633b31" 00:18:02.596 ], 00:18:02.596 "product_name": "Malloc 
disk", 00:18:02.596 "block_size": 4096, 00:18:02.596 "num_blocks": 8192, 00:18:02.596 "uuid": "04560e31-d37b-4bf7-8267-9b8a15633b31", 00:18:02.596 "assigned_rate_limits": { 00:18:02.596 "rw_ios_per_sec": 0, 00:18:02.596 "rw_mbytes_per_sec": 0, 00:18:02.596 "r_mbytes_per_sec": 0, 00:18:02.596 "w_mbytes_per_sec": 0 00:18:02.596 }, 00:18:02.596 "claimed": true, 00:18:02.596 "claim_type": "exclusive_write", 00:18:02.596 "zoned": false, 00:18:02.596 "supported_io_types": { 00:18:02.596 "read": true, 00:18:02.596 "write": true, 00:18:02.596 "unmap": true, 00:18:02.596 "flush": true, 00:18:02.596 "reset": true, 00:18:02.596 "nvme_admin": false, 00:18:02.596 "nvme_io": false, 00:18:02.596 "nvme_io_md": false, 00:18:02.596 "write_zeroes": true, 00:18:02.596 "zcopy": true, 00:18:02.596 "get_zone_info": false, 00:18:02.596 "zone_management": false, 00:18:02.596 "zone_append": false, 00:18:02.596 "compare": false, 00:18:02.596 "compare_and_write": false, 00:18:02.596 "abort": true, 00:18:02.596 "seek_hole": false, 00:18:02.596 "seek_data": false, 00:18:02.596 "copy": true, 00:18:02.596 "nvme_iov_md": false 00:18:02.596 }, 00:18:02.596 "memory_domains": [ 00:18:02.596 { 00:18:02.596 "dma_device_id": "system", 00:18:02.596 "dma_device_type": 1 00:18:02.596 }, 00:18:02.596 { 00:18:02.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.596 "dma_device_type": 2 00:18:02.596 } 00:18:02.596 ], 00:18:02.596 "driver_specific": {} 00:18:02.596 } 00:18:02.596 ] 00:18:02.596 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.597 11:50:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.597 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.597 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.597 "name": "Existed_Raid", 00:18:02.597 "uuid": "6a9083e1-4725-4ef4-99f9-b7ad03f806e0", 00:18:02.597 "strip_size_kb": 0, 00:18:02.597 "state": "online", 
00:18:02.597 "raid_level": "raid1", 00:18:02.597 "superblock": true, 00:18:02.597 "num_base_bdevs": 2, 00:18:02.597 "num_base_bdevs_discovered": 2, 00:18:02.597 "num_base_bdevs_operational": 2, 00:18:02.597 "base_bdevs_list": [ 00:18:02.597 { 00:18:02.597 "name": "BaseBdev1", 00:18:02.597 "uuid": "5f8171ca-2e69-48f5-b9bd-baaa74b7f996", 00:18:02.597 "is_configured": true, 00:18:02.597 "data_offset": 256, 00:18:02.597 "data_size": 7936 00:18:02.597 }, 00:18:02.597 { 00:18:02.597 "name": "BaseBdev2", 00:18:02.597 "uuid": "04560e31-d37b-4bf7-8267-9b8a15633b31", 00:18:02.597 "is_configured": true, 00:18:02.597 "data_offset": 256, 00:18:02.597 "data_size": 7936 00:18:02.597 } 00:18:02.597 ] 00:18:02.597 }' 00:18:02.597 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.597 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.166 [2024-11-04 11:50:28.442983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.166 "name": "Existed_Raid", 00:18:03.166 "aliases": [ 00:18:03.166 "6a9083e1-4725-4ef4-99f9-b7ad03f806e0" 00:18:03.166 ], 00:18:03.166 "product_name": "Raid Volume", 00:18:03.166 "block_size": 4096, 00:18:03.166 "num_blocks": 7936, 00:18:03.166 "uuid": "6a9083e1-4725-4ef4-99f9-b7ad03f806e0", 00:18:03.166 "assigned_rate_limits": { 00:18:03.166 "rw_ios_per_sec": 0, 00:18:03.166 "rw_mbytes_per_sec": 0, 00:18:03.166 "r_mbytes_per_sec": 0, 00:18:03.166 "w_mbytes_per_sec": 0 00:18:03.166 }, 00:18:03.166 "claimed": false, 00:18:03.166 "zoned": false, 00:18:03.166 "supported_io_types": { 00:18:03.166 "read": true, 00:18:03.166 "write": true, 00:18:03.166 "unmap": false, 00:18:03.166 "flush": false, 00:18:03.166 "reset": true, 00:18:03.166 "nvme_admin": false, 00:18:03.166 "nvme_io": false, 00:18:03.166 "nvme_io_md": false, 00:18:03.166 "write_zeroes": true, 00:18:03.166 "zcopy": false, 00:18:03.166 "get_zone_info": false, 00:18:03.166 "zone_management": false, 00:18:03.166 "zone_append": false, 00:18:03.166 "compare": false, 00:18:03.166 "compare_and_write": false, 00:18:03.166 "abort": false, 00:18:03.166 "seek_hole": false, 00:18:03.166 "seek_data": false, 00:18:03.166 "copy": false, 00:18:03.166 "nvme_iov_md": false 00:18:03.166 }, 00:18:03.166 "memory_domains": [ 00:18:03.166 { 00:18:03.166 "dma_device_id": "system", 00:18:03.166 "dma_device_type": 1 00:18:03.166 }, 00:18:03.166 { 00:18:03.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.166 "dma_device_type": 2 00:18:03.166 }, 00:18:03.166 { 00:18:03.166 
"dma_device_id": "system", 00:18:03.166 "dma_device_type": 1 00:18:03.166 }, 00:18:03.166 { 00:18:03.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.166 "dma_device_type": 2 00:18:03.166 } 00:18:03.166 ], 00:18:03.166 "driver_specific": { 00:18:03.166 "raid": { 00:18:03.166 "uuid": "6a9083e1-4725-4ef4-99f9-b7ad03f806e0", 00:18:03.166 "strip_size_kb": 0, 00:18:03.166 "state": "online", 00:18:03.166 "raid_level": "raid1", 00:18:03.166 "superblock": true, 00:18:03.166 "num_base_bdevs": 2, 00:18:03.166 "num_base_bdevs_discovered": 2, 00:18:03.166 "num_base_bdevs_operational": 2, 00:18:03.166 "base_bdevs_list": [ 00:18:03.166 { 00:18:03.166 "name": "BaseBdev1", 00:18:03.166 "uuid": "5f8171ca-2e69-48f5-b9bd-baaa74b7f996", 00:18:03.166 "is_configured": true, 00:18:03.166 "data_offset": 256, 00:18:03.166 "data_size": 7936 00:18:03.166 }, 00:18:03.166 { 00:18:03.166 "name": "BaseBdev2", 00:18:03.166 "uuid": "04560e31-d37b-4bf7-8267-9b8a15633b31", 00:18:03.166 "is_configured": true, 00:18:03.166 "data_offset": 256, 00:18:03.166 "data_size": 7936 00:18:03.166 } 00:18:03.166 ] 00:18:03.166 } 00:18:03.166 } 00:18:03.166 }' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:03.166 BaseBdev2' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:03.166 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.166 
11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.166 [2024-11-04 11:50:28.642426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.425 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.425 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:03.425 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:03.425 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:03.425 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:03.425 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.426 11:50:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.426 "name": "Existed_Raid", 00:18:03.426 "uuid": "6a9083e1-4725-4ef4-99f9-b7ad03f806e0", 00:18:03.426 "strip_size_kb": 0, 00:18:03.426 "state": "online", 00:18:03.426 "raid_level": "raid1", 00:18:03.426 "superblock": true, 00:18:03.426 "num_base_bdevs": 2, 00:18:03.426 "num_base_bdevs_discovered": 1, 00:18:03.426 "num_base_bdevs_operational": 1, 00:18:03.426 "base_bdevs_list": [ 00:18:03.426 { 00:18:03.426 "name": null, 00:18:03.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.426 "is_configured": false, 00:18:03.426 "data_offset": 0, 00:18:03.426 "data_size": 7936 00:18:03.426 }, 00:18:03.426 { 00:18:03.426 "name": "BaseBdev2", 00:18:03.426 "uuid": "04560e31-d37b-4bf7-8267-9b8a15633b31", 00:18:03.426 "is_configured": true, 00:18:03.426 "data_offset": 256, 00:18:03.426 "data_size": 7936 00:18:03.426 } 00:18:03.426 ] 00:18:03.426 }' 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.426 11:50:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.684 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:03.684 11:50:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.684 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.684 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:03.684 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.684 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.684 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.943 [2024-11-04 11:50:29.231666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.943 [2024-11-04 11:50:29.231767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.943 [2024-11-04 11:50:29.323863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.943 [2024-11-04 11:50:29.323917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.943 [2024-11-04 11:50:29.323929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:03.943 11:50:29 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86166 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86166 ']' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86166 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86166 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:03.943 killing process with pid 86166 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86166' 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86166 00:18:03.943 [2024-11-04 11:50:29.417983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.943 11:50:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86166 00:18:03.943 [2024-11-04 11:50:29.435882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.323 11:50:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:05.323 00:18:05.323 real 0m4.968s 00:18:05.323 user 0m7.164s 00:18:05.323 sys 0m0.858s 00:18:05.323 11:50:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:05.323 11:50:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.323 ************************************ 00:18:05.323 END TEST raid_state_function_test_sb_4k 00:18:05.323 ************************************ 00:18:05.323 11:50:30 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:05.323 11:50:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:05.323 11:50:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:05.323 11:50:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.323 ************************************ 00:18:05.323 START TEST raid_superblock_test_4k 00:18:05.323 ************************************ 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # 
raid_superblock_test raid1 2 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:05.323 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86418 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86418 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86418 ']' 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:05.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:05.324 11:50:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.324 [2024-11-04 11:50:30.660127] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:18:05.324 [2024-11-04 11:50:30.660245] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86418 ] 00:18:05.324 [2024-11-04 11:50:30.833744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.583 [2024-11-04 11:50:30.945781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.843 [2024-11-04 11:50:31.138223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.843 [2024-11-04 11:50:31.138279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:18:06.125 11:50:31 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 malloc1 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 [2024-11-04 11:50:31.537045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.125 [2024-11-04 11:50:31.537151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.125 
[2024-11-04 11:50:31.537197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.125 [2024-11-04 11:50:31.537245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.125 [2024-11-04 11:50:31.539480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.125 [2024-11-04 11:50:31.539542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.125 pt1 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 malloc2 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 [2024-11-04 11:50:31.595246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:06.125 [2024-11-04 11:50:31.595301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.125 [2024-11-04 11:50:31.595321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.125 [2024-11-04 11:50:31.595329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.125 [2024-11-04 11:50:31.597325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.125 [2024-11-04 11:50:31.597360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:06.125 pt2 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 [2024-11-04 11:50:31.607275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:06.125 [2024-11-04 11:50:31.609058] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:06.125 [2024-11-04 11:50:31.609292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.125 [2024-11-04 11:50:31.609342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.125 [2024-11-04 11:50:31.609620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:06.125 [2024-11-04 11:50:31.609816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.125 [2024-11-04 11:50:31.609864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.125 [2024-11-04 11:50:31.610074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.125 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.384 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.384 "name": "raid_bdev1", 00:18:06.384 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:06.384 "strip_size_kb": 0, 00:18:06.384 "state": "online", 00:18:06.384 "raid_level": "raid1", 00:18:06.384 "superblock": true, 00:18:06.384 "num_base_bdevs": 2, 00:18:06.384 "num_base_bdevs_discovered": 2, 00:18:06.384 "num_base_bdevs_operational": 2, 00:18:06.384 "base_bdevs_list": [ 00:18:06.384 { 00:18:06.384 "name": "pt1", 00:18:06.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.384 "is_configured": true, 00:18:06.384 "data_offset": 256, 00:18:06.384 "data_size": 7936 00:18:06.384 }, 00:18:06.384 { 00:18:06.384 "name": "pt2", 00:18:06.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.384 "is_configured": true, 00:18:06.384 "data_offset": 256, 00:18:06.384 "data_size": 7936 00:18:06.384 } 00:18:06.384 ] 00:18:06.384 }' 00:18:06.384 11:50:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.384 11:50:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:06.643 11:50:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.643 [2024-11-04 11:50:32.058764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.643 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.643 "name": "raid_bdev1", 00:18:06.643 "aliases": [ 00:18:06.643 "b7a6034a-357d-4608-b7de-9b4616fe1176" 00:18:06.643 ], 00:18:06.643 "product_name": "Raid Volume", 00:18:06.643 "block_size": 4096, 00:18:06.643 "num_blocks": 7936, 00:18:06.643 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:06.643 "assigned_rate_limits": { 00:18:06.643 "rw_ios_per_sec": 0, 00:18:06.643 "rw_mbytes_per_sec": 0, 00:18:06.643 "r_mbytes_per_sec": 0, 00:18:06.643 "w_mbytes_per_sec": 0 00:18:06.643 }, 00:18:06.643 "claimed": false, 00:18:06.643 "zoned": false, 00:18:06.643 "supported_io_types": { 00:18:06.643 "read": true, 00:18:06.643 "write": true, 00:18:06.643 "unmap": false, 00:18:06.643 "flush": false, 
00:18:06.643 "reset": true, 00:18:06.643 "nvme_admin": false, 00:18:06.643 "nvme_io": false, 00:18:06.643 "nvme_io_md": false, 00:18:06.643 "write_zeroes": true, 00:18:06.643 "zcopy": false, 00:18:06.643 "get_zone_info": false, 00:18:06.643 "zone_management": false, 00:18:06.643 "zone_append": false, 00:18:06.643 "compare": false, 00:18:06.643 "compare_and_write": false, 00:18:06.643 "abort": false, 00:18:06.643 "seek_hole": false, 00:18:06.643 "seek_data": false, 00:18:06.643 "copy": false, 00:18:06.643 "nvme_iov_md": false 00:18:06.643 }, 00:18:06.643 "memory_domains": [ 00:18:06.643 { 00:18:06.644 "dma_device_id": "system", 00:18:06.644 "dma_device_type": 1 00:18:06.644 }, 00:18:06.644 { 00:18:06.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.644 "dma_device_type": 2 00:18:06.644 }, 00:18:06.644 { 00:18:06.644 "dma_device_id": "system", 00:18:06.644 "dma_device_type": 1 00:18:06.644 }, 00:18:06.644 { 00:18:06.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.644 "dma_device_type": 2 00:18:06.644 } 00:18:06.644 ], 00:18:06.644 "driver_specific": { 00:18:06.644 "raid": { 00:18:06.644 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:06.644 "strip_size_kb": 0, 00:18:06.644 "state": "online", 00:18:06.644 "raid_level": "raid1", 00:18:06.644 "superblock": true, 00:18:06.644 "num_base_bdevs": 2, 00:18:06.644 "num_base_bdevs_discovered": 2, 00:18:06.644 "num_base_bdevs_operational": 2, 00:18:06.644 "base_bdevs_list": [ 00:18:06.644 { 00:18:06.644 "name": "pt1", 00:18:06.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:06.644 "is_configured": true, 00:18:06.644 "data_offset": 256, 00:18:06.644 "data_size": 7936 00:18:06.644 }, 00:18:06.644 { 00:18:06.644 "name": "pt2", 00:18:06.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.644 "is_configured": true, 00:18:06.644 "data_offset": 256, 00:18:06.644 "data_size": 7936 00:18:06.644 } 00:18:06.644 ] 00:18:06.644 } 00:18:06.644 } 00:18:06.644 }' 00:18:06.644 11:50:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.644 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:06.644 pt2' 00:18:06.644 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 [2024-11-04 11:50:32.266377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b7a6034a-357d-4608-b7de-9b4616fe1176 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b7a6034a-357d-4608-b7de-9b4616fe1176 ']' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 [2024-11-04 11:50:32.310015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.904 [2024-11-04 11:50:32.310079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.904 [2024-11-04 11:50:32.310156] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.904 [2024-11-04 11:50:32.310228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.904 [2024-11-04 11:50:32.310240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.904 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.164 [2024-11-04 11:50:32.433826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:07.164 [2024-11-04 11:50:32.435745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:07.164 [2024-11-04 11:50:32.435855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:07.164 [2024-11-04 11:50:32.435963] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:07.164 [2024-11-04 11:50:32.436003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.164 [2024-11-04 11:50:32.436066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:07.164 request: 00:18:07.164 { 00:18:07.164 "name": "raid_bdev1", 00:18:07.164 "raid_level": "raid1", 00:18:07.164 "base_bdevs": [ 00:18:07.164 "malloc1", 00:18:07.164 "malloc2" 00:18:07.164 ], 00:18:07.164 "superblock": false, 00:18:07.164 "method": "bdev_raid_create", 00:18:07.164 "req_id": 1 00:18:07.164 } 00:18:07.164 Got JSON-RPC error response 00:18:07.164 response: 00:18:07.164 { 00:18:07.164 "code": -17, 00:18:07.164 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:07.164 } 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.164 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.164 [2024-11-04 11:50:32.501717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:07.164 [2024-11-04 11:50:32.501814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.164 [2024-11-04 11:50:32.501847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:07.164 [2024-11-04 11:50:32.501877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.164 [2024-11-04 11:50:32.504087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.164 [2024-11-04 11:50:32.504186] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:07.165 [2024-11-04 11:50:32.504303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:07.165 [2024-11-04 11:50:32.504441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.165 pt1 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.165 "name": "raid_bdev1", 00:18:07.165 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:07.165 "strip_size_kb": 0, 00:18:07.165 "state": "configuring", 00:18:07.165 "raid_level": "raid1", 00:18:07.165 "superblock": true, 00:18:07.165 "num_base_bdevs": 2, 00:18:07.165 "num_base_bdevs_discovered": 1, 00:18:07.165 "num_base_bdevs_operational": 2, 00:18:07.165 "base_bdevs_list": [ 00:18:07.165 { 00:18:07.165 "name": "pt1", 00:18:07.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.165 "is_configured": true, 00:18:07.165 "data_offset": 256, 00:18:07.165 "data_size": 7936 00:18:07.165 }, 00:18:07.165 { 00:18:07.165 "name": null, 00:18:07.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.165 "is_configured": false, 00:18:07.165 "data_offset": 256, 00:18:07.165 "data_size": 7936 00:18:07.165 } 00:18:07.165 ] 00:18:07.165 }' 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.165 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:18:07.425 [2024-11-04 11:50:32.917057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.425 [2024-11-04 11:50:32.917127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.425 [2024-11-04 11:50:32.917150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:07.425 [2024-11-04 11:50:32.917160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.425 [2024-11-04 11:50:32.917604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.425 [2024-11-04 11:50:32.917661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.425 [2024-11-04 11:50:32.917751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:07.425 [2024-11-04 11:50:32.917786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.425 [2024-11-04 11:50:32.917914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:07.425 [2024-11-04 11:50:32.917926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:07.425 [2024-11-04 11:50:32.918150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:07.425 [2024-11-04 11:50:32.918298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:07.425 [2024-11-04 11:50:32.918307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:07.425 [2024-11-04 11:50:32.918454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.425 pt2 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:07.425 11:50:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.425 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.686 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.686 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.686 "name": "raid_bdev1", 00:18:07.686 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:07.686 
"strip_size_kb": 0, 00:18:07.686 "state": "online", 00:18:07.686 "raid_level": "raid1", 00:18:07.686 "superblock": true, 00:18:07.686 "num_base_bdevs": 2, 00:18:07.686 "num_base_bdevs_discovered": 2, 00:18:07.686 "num_base_bdevs_operational": 2, 00:18:07.686 "base_bdevs_list": [ 00:18:07.686 { 00:18:07.686 "name": "pt1", 00:18:07.686 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.686 "is_configured": true, 00:18:07.686 "data_offset": 256, 00:18:07.686 "data_size": 7936 00:18:07.686 }, 00:18:07.686 { 00:18:07.686 "name": "pt2", 00:18:07.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.686 "is_configured": true, 00:18:07.686 "data_offset": 256, 00:18:07.686 "data_size": 7936 00:18:07.686 } 00:18:07.686 ] 00:18:07.686 }' 00:18:07.686 11:50:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.686 11:50:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.948 11:50:33 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:07.948 [2024-11-04 11:50:33.376568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.948 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.948 "name": "raid_bdev1", 00:18:07.948 "aliases": [ 00:18:07.948 "b7a6034a-357d-4608-b7de-9b4616fe1176" 00:18:07.948 ], 00:18:07.948 "product_name": "Raid Volume", 00:18:07.948 "block_size": 4096, 00:18:07.948 "num_blocks": 7936, 00:18:07.948 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:07.948 "assigned_rate_limits": { 00:18:07.948 "rw_ios_per_sec": 0, 00:18:07.948 "rw_mbytes_per_sec": 0, 00:18:07.948 "r_mbytes_per_sec": 0, 00:18:07.948 "w_mbytes_per_sec": 0 00:18:07.948 }, 00:18:07.948 "claimed": false, 00:18:07.948 "zoned": false, 00:18:07.948 "supported_io_types": { 00:18:07.948 "read": true, 00:18:07.948 "write": true, 00:18:07.948 "unmap": false, 00:18:07.949 "flush": false, 00:18:07.949 "reset": true, 00:18:07.949 "nvme_admin": false, 00:18:07.949 "nvme_io": false, 00:18:07.949 "nvme_io_md": false, 00:18:07.949 "write_zeroes": true, 00:18:07.949 "zcopy": false, 00:18:07.949 "get_zone_info": false, 00:18:07.949 "zone_management": false, 00:18:07.949 "zone_append": false, 00:18:07.949 "compare": false, 00:18:07.949 "compare_and_write": false, 00:18:07.949 "abort": false, 00:18:07.949 "seek_hole": false, 00:18:07.949 "seek_data": false, 00:18:07.949 "copy": false, 00:18:07.949 "nvme_iov_md": false 00:18:07.949 }, 00:18:07.949 "memory_domains": [ 00:18:07.949 { 00:18:07.949 "dma_device_id": "system", 00:18:07.949 "dma_device_type": 1 00:18:07.949 }, 00:18:07.949 { 00:18:07.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.949 "dma_device_type": 2 00:18:07.949 }, 00:18:07.949 { 00:18:07.949 "dma_device_id": "system", 00:18:07.949 
"dma_device_type": 1 00:18:07.949 }, 00:18:07.949 { 00:18:07.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.949 "dma_device_type": 2 00:18:07.949 } 00:18:07.949 ], 00:18:07.949 "driver_specific": { 00:18:07.949 "raid": { 00:18:07.949 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:07.949 "strip_size_kb": 0, 00:18:07.949 "state": "online", 00:18:07.949 "raid_level": "raid1", 00:18:07.949 "superblock": true, 00:18:07.949 "num_base_bdevs": 2, 00:18:07.949 "num_base_bdevs_discovered": 2, 00:18:07.949 "num_base_bdevs_operational": 2, 00:18:07.949 "base_bdevs_list": [ 00:18:07.949 { 00:18:07.949 "name": "pt1", 00:18:07.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:07.949 "is_configured": true, 00:18:07.949 "data_offset": 256, 00:18:07.949 "data_size": 7936 00:18:07.949 }, 00:18:07.949 { 00:18:07.949 "name": "pt2", 00:18:07.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.949 "is_configured": true, 00:18:07.949 "data_offset": 256, 00:18:07.949 "data_size": 7936 00:18:07.949 } 00:18:07.949 ] 00:18:07.949 } 00:18:07.949 } 00:18:07.949 }' 00:18:07.949 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.949 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:07.949 pt2' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.208 [2024-11-04 11:50:33.648114] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b7a6034a-357d-4608-b7de-9b4616fe1176 '!=' b7a6034a-357d-4608-b7de-9b4616fe1176 ']' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.208 [2024-11-04 11:50:33.695785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.208 11:50:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.467 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.468 "name": "raid_bdev1", 00:18:08.468 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:08.468 "strip_size_kb": 0, 00:18:08.468 "state": "online", 00:18:08.468 "raid_level": "raid1", 00:18:08.468 "superblock": true, 00:18:08.468 "num_base_bdevs": 2, 00:18:08.468 "num_base_bdevs_discovered": 1, 00:18:08.468 "num_base_bdevs_operational": 1, 00:18:08.468 "base_bdevs_list": [ 00:18:08.468 { 00:18:08.468 "name": null, 00:18:08.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.468 "is_configured": false, 00:18:08.468 "data_offset": 0, 00:18:08.468 "data_size": 7936 00:18:08.468 }, 00:18:08.468 { 00:18:08.468 "name": "pt2", 00:18:08.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.468 "is_configured": true, 00:18:08.468 "data_offset": 256, 00:18:08.468 "data_size": 7936 00:18:08.468 } 00:18:08.468 ] 00:18:08.468 }' 00:18:08.468 11:50:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.468 11:50:33 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.728 [2024-11-04 11:50:34.103072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.728 [2024-11-04 11:50:34.103169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.728 [2024-11-04 11:50:34.103281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.728 [2024-11-04 11:50:34.103407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.728 [2024-11-04 11:50:34.103465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.728 [2024-11-04 11:50:34.166937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.728 [2024-11-04 11:50:34.167053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.728 [2024-11-04 11:50:34.167078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:08.728 [2024-11-04 11:50:34.167091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.728 [2024-11-04 11:50:34.169521] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.728 [2024-11-04 11:50:34.169559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.728 [2024-11-04 11:50:34.169644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:08.728 [2024-11-04 11:50:34.169692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.728 [2024-11-04 11:50:34.169811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:08.728 [2024-11-04 11:50:34.169829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:08.728 [2024-11-04 11:50:34.170104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:08.728 [2024-11-04 11:50:34.170271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:08.728 [2024-11-04 11:50:34.170281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:08.728 [2024-11-04 11:50:34.170500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.728 pt2 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.728 "name": "raid_bdev1", 00:18:08.728 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:08.728 "strip_size_kb": 0, 00:18:08.728 "state": "online", 00:18:08.728 "raid_level": "raid1", 00:18:08.728 "superblock": true, 00:18:08.728 "num_base_bdevs": 2, 00:18:08.728 "num_base_bdevs_discovered": 1, 00:18:08.728 "num_base_bdevs_operational": 1, 00:18:08.728 "base_bdevs_list": [ 00:18:08.728 { 00:18:08.728 "name": null, 00:18:08.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.728 "is_configured": false, 00:18:08.728 "data_offset": 256, 00:18:08.728 "data_size": 7936 00:18:08.728 }, 00:18:08.728 { 00:18:08.728 "name": "pt2", 00:18:08.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.728 "is_configured": true, 00:18:08.728 "data_offset": 256, 00:18:08.728 "data_size": 7936 00:18:08.728 } 00:18:08.728 ] 00:18:08.728 }' 
00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.728 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.298 [2024-11-04 11:50:34.658102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.298 [2024-11-04 11:50:34.658134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.298 [2024-11-04 11:50:34.658208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.298 [2024-11-04 11:50:34.658256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.298 [2024-11-04 11:50:34.658266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.298 [2024-11-04 11:50:34.718023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.298 [2024-11-04 11:50:34.718149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.298 [2024-11-04 11:50:34.718191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:09.298 [2024-11-04 11:50:34.718199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.298 [2024-11-04 11:50:34.720431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.298 [2024-11-04 11:50:34.720467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.298 [2024-11-04 11:50:34.720550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:09.298 [2024-11-04 11:50:34.720618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.298 [2024-11-04 11:50:34.720801] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:09.298 [2024-11-04 11:50:34.720818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.298 [2024-11-04 11:50:34.720835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:09.298 [2024-11-04 11:50:34.720895] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.298 [2024-11-04 11:50:34.720978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:09.298 [2024-11-04 11:50:34.720986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.298 [2024-11-04 11:50:34.721249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:09.298 [2024-11-04 11:50:34.721472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:09.298 [2024-11-04 11:50:34.721491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:09.298 [2024-11-04 11:50:34.721646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.298 pt1 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.298 "name": "raid_bdev1", 00:18:09.298 "uuid": "b7a6034a-357d-4608-b7de-9b4616fe1176", 00:18:09.298 "strip_size_kb": 0, 00:18:09.298 "state": "online", 00:18:09.298 "raid_level": "raid1", 00:18:09.298 "superblock": true, 00:18:09.298 "num_base_bdevs": 2, 00:18:09.298 "num_base_bdevs_discovered": 1, 00:18:09.298 "num_base_bdevs_operational": 1, 00:18:09.298 "base_bdevs_list": [ 00:18:09.298 { 00:18:09.298 "name": null, 00:18:09.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.298 "is_configured": false, 00:18:09.298 "data_offset": 256, 00:18:09.298 "data_size": 7936 00:18:09.298 }, 00:18:09.298 { 00:18:09.298 "name": "pt2", 00:18:09.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.298 "is_configured": true, 00:18:09.298 "data_offset": 256, 00:18:09.298 "data_size": 7936 00:18:09.298 } 00:18:09.298 ] 00:18:09.298 }' 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.298 11:50:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.867 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:09.868 [2024-11-04 11:50:35.209422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b7a6034a-357d-4608-b7de-9b4616fe1176 '!=' b7a6034a-357d-4608-b7de-9b4616fe1176 ']' 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86418 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86418 ']' 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86418 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86418 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:09.868 killing process with pid 86418 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86418' 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86418 00:18:09.868 [2024-11-04 11:50:35.292664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.868 [2024-11-04 11:50:35.292758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.868 [2024-11-04 11:50:35.292804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.868 [2024-11-04 11:50:35.292818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:09.868 11:50:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86418 00:18:10.127 [2024-11-04 11:50:35.492037] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.065 11:50:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:11.065 00:18:11.065 real 0m5.991s 00:18:11.065 user 0m9.109s 00:18:11.065 sys 0m1.056s 00:18:11.065 ************************************ 00:18:11.065 END TEST raid_superblock_test_4k 00:18:11.065 ************************************ 00:18:11.065 11:50:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:11.065 11:50:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.325 11:50:36 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:18:11.325 11:50:36 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:11.325 11:50:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:11.325 11:50:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:11.325 11:50:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.325 ************************************ 00:18:11.325 START TEST raid_rebuild_test_sb_4k 00:18:11.325 ************************************ 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86741 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86741 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86741 ']' 00:18:11.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:11.325 11:50:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:11.325 Zero copy mechanism will not be used. 00:18:11.325 [2024-11-04 11:50:36.722415] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:18:11.325 [2024-11-04 11:50:36.722527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86741 ] 00:18:11.585 [2024-11-04 11:50:36.878168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.585 [2024-11-04 11:50:36.988224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.851 [2024-11-04 11:50:37.186326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.851 [2024-11-04 11:50:37.186433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.125 11:50:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.125 BaseBdev1_malloc 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.125 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.126 [2024-11-04 11:50:37.594006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.126 [2024-11-04 11:50:37.594082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.126 [2024-11-04 11:50:37.594106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:12.126 [2024-11-04 11:50:37.594117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.126 [2024-11-04 11:50:37.596272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.126 [2024-11-04 11:50:37.596314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:12.126 BaseBdev1 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:12.126 11:50:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.126 BaseBdev2_malloc 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.126 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.385 [2024-11-04 11:50:37.648971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:12.385 [2024-11-04 11:50:37.649114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.385 [2024-11-04 11:50:37.649155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.385 [2024-11-04 11:50:37.649171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.385 [2024-11-04 11:50:37.651388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.385 [2024-11-04 11:50:37.651439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:12.385 BaseBdev2 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.385 spare_malloc 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.385 spare_delay 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.385 [2024-11-04 11:50:37.727114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.385 [2024-11-04 11:50:37.727178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.385 [2024-11-04 11:50:37.727199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:12.385 [2024-11-04 11:50:37.727209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.385 [2024-11-04 11:50:37.729485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.385 [2024-11-04 11:50:37.729564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.385 spare 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.385 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.385 [2024-11-04 11:50:37.739166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.385 [2024-11-04 11:50:37.741065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.385 [2024-11-04 11:50:37.741405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.385 [2024-11-04 11:50:37.741427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:12.386 [2024-11-04 11:50:37.741674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:12.386 [2024-11-04 11:50:37.741850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.386 [2024-11-04 11:50:37.741859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.386 [2024-11-04 11:50:37.742016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.386 
11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.386 "name": "raid_bdev1", 00:18:12.386 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:12.386 "strip_size_kb": 0, 00:18:12.386 "state": "online", 00:18:12.386 "raid_level": "raid1", 00:18:12.386 "superblock": true, 00:18:12.386 "num_base_bdevs": 2, 00:18:12.386 "num_base_bdevs_discovered": 2, 00:18:12.386 "num_base_bdevs_operational": 2, 00:18:12.386 "base_bdevs_list": [ 00:18:12.386 { 00:18:12.386 "name": "BaseBdev1", 00:18:12.386 "uuid": "4c71ec9f-63d9-52d4-b5e4-ffdf39d603c8", 00:18:12.386 "is_configured": true, 00:18:12.386 "data_offset": 256, 00:18:12.386 "data_size": 7936 00:18:12.386 }, 00:18:12.386 { 00:18:12.386 "name": "BaseBdev2", 00:18:12.386 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:12.386 "is_configured": true, 00:18:12.386 "data_offset": 256, 00:18:12.386 "data_size": 7936 00:18:12.386 } 00:18:12.386 ] 00:18:12.386 }' 00:18:12.386 11:50:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.386 11:50:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.955 [2024-11-04 11:50:38.206670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:12.955 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:13.215 [2024-11-04 11:50:38.485909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:13.215 /dev/nbd0 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:13.215 1+0 records in 00:18:13.215 1+0 records out 00:18:13.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459426 s, 8.9 MB/s 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:13.215 11:50:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:13.783 7936+0 records in 00:18:13.783 7936+0 records out 00:18:13.783 32505856 bytes (33 MB, 31 MiB) copied, 0.608442 s, 53.4 MB/s 00:18:13.783 11:50:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:13.783 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:13.783 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:13.783 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:13.783 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:13.783 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:13.783 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.043 [2024-11-04 11:50:39.405547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.043 [2024-11-04 11:50:39.417658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.043 "name": "raid_bdev1", 00:18:14.043 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:14.043 "strip_size_kb": 0, 00:18:14.043 "state": "online", 00:18:14.043 "raid_level": "raid1", 00:18:14.043 "superblock": true, 00:18:14.043 "num_base_bdevs": 2, 00:18:14.043 "num_base_bdevs_discovered": 1, 00:18:14.043 "num_base_bdevs_operational": 1, 00:18:14.043 "base_bdevs_list": [ 00:18:14.043 { 00:18:14.043 "name": null, 00:18:14.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.043 "is_configured": false, 00:18:14.043 "data_offset": 0, 00:18:14.043 "data_size": 7936 00:18:14.043 }, 00:18:14.043 { 00:18:14.043 "name": "BaseBdev2", 00:18:14.043 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:14.043 "is_configured": true, 00:18:14.043 "data_offset": 256, 00:18:14.043 "data_size": 7936 00:18:14.043 } 00:18:14.043 ] 00:18:14.043 }' 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.043 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.615 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.615 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.615 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.615 [2024-11-04 11:50:39.888842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.615 [2024-11-04 11:50:39.906442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:14.615 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.615 11:50:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:14.615 [2024-11-04 
11:50:39.908484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.554 "name": "raid_bdev1", 00:18:15.554 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:15.554 "strip_size_kb": 0, 00:18:15.554 "state": "online", 00:18:15.554 "raid_level": "raid1", 00:18:15.554 "superblock": true, 00:18:15.554 "num_base_bdevs": 2, 00:18:15.554 "num_base_bdevs_discovered": 2, 00:18:15.554 "num_base_bdevs_operational": 2, 00:18:15.554 "process": { 00:18:15.554 "type": "rebuild", 00:18:15.554 "target": "spare", 00:18:15.554 "progress": { 00:18:15.554 "blocks": 2560, 00:18:15.554 "percent": 32 00:18:15.554 } 00:18:15.554 }, 00:18:15.554 "base_bdevs_list": [ 00:18:15.554 { 00:18:15.554 "name": "spare", 
00:18:15.554 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:15.554 "is_configured": true, 00:18:15.554 "data_offset": 256, 00:18:15.554 "data_size": 7936 00:18:15.554 }, 00:18:15.554 { 00:18:15.554 "name": "BaseBdev2", 00:18:15.554 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:15.554 "is_configured": true, 00:18:15.554 "data_offset": 256, 00:18:15.554 "data_size": 7936 00:18:15.554 } 00:18:15.554 ] 00:18:15.554 }' 00:18:15.554 11:50:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.554 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.554 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.554 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.554 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:15.554 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.554 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.554 [2024-11-04 11:50:41.063850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.813 [2024-11-04 11:50:41.113427] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:15.814 [2024-11-04 11:50:41.113491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.814 [2024-11-04 11:50:41.113505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.814 [2024-11-04 11:50:41.113515] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.814 11:50:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.814 "name": "raid_bdev1", 00:18:15.814 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:15.814 "strip_size_kb": 0, 00:18:15.814 "state": "online", 00:18:15.814 "raid_level": "raid1", 00:18:15.814 
"superblock": true, 00:18:15.814 "num_base_bdevs": 2, 00:18:15.814 "num_base_bdevs_discovered": 1, 00:18:15.814 "num_base_bdevs_operational": 1, 00:18:15.814 "base_bdevs_list": [ 00:18:15.814 { 00:18:15.814 "name": null, 00:18:15.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.814 "is_configured": false, 00:18:15.814 "data_offset": 0, 00:18:15.814 "data_size": 7936 00:18:15.814 }, 00:18:15.814 { 00:18:15.814 "name": "BaseBdev2", 00:18:15.814 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:15.814 "is_configured": true, 00:18:15.814 "data_offset": 256, 00:18:15.814 "data_size": 7936 00:18:15.814 } 00:18:15.814 ] 00:18:15.814 }' 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.814 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.073 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.073 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.073 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.073 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.073 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.332 "name": "raid_bdev1", 00:18:16.332 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:16.332 "strip_size_kb": 0, 00:18:16.332 "state": "online", 00:18:16.332 "raid_level": "raid1", 00:18:16.332 "superblock": true, 00:18:16.332 "num_base_bdevs": 2, 00:18:16.332 "num_base_bdevs_discovered": 1, 00:18:16.332 "num_base_bdevs_operational": 1, 00:18:16.332 "base_bdevs_list": [ 00:18:16.332 { 00:18:16.332 "name": null, 00:18:16.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.332 "is_configured": false, 00:18:16.332 "data_offset": 0, 00:18:16.332 "data_size": 7936 00:18:16.332 }, 00:18:16.332 { 00:18:16.332 "name": "BaseBdev2", 00:18:16.332 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:16.332 "is_configured": true, 00:18:16.332 "data_offset": 256, 00:18:16.332 "data_size": 7936 00:18:16.332 } 00:18:16.332 ] 00:18:16.332 }' 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.332 [2024-11-04 11:50:41.736789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.332 [2024-11-04 11:50:41.752865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00018d330 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.332 11:50:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:16.332 [2024-11-04 11:50:41.754715] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.270 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.270 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.270 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.270 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.270 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.270 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.271 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.271 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.271 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.271 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.530 "name": "raid_bdev1", 00:18:17.530 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:17.530 "strip_size_kb": 0, 00:18:17.530 "state": "online", 00:18:17.530 "raid_level": "raid1", 00:18:17.530 "superblock": true, 00:18:17.530 "num_base_bdevs": 2, 00:18:17.530 "num_base_bdevs_discovered": 2, 00:18:17.530 "num_base_bdevs_operational": 2, 00:18:17.530 "process": { 00:18:17.530 
"type": "rebuild", 00:18:17.530 "target": "spare", 00:18:17.530 "progress": { 00:18:17.530 "blocks": 2560, 00:18:17.530 "percent": 32 00:18:17.530 } 00:18:17.530 }, 00:18:17.530 "base_bdevs_list": [ 00:18:17.530 { 00:18:17.530 "name": "spare", 00:18:17.530 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:17.530 "is_configured": true, 00:18:17.530 "data_offset": 256, 00:18:17.530 "data_size": 7936 00:18:17.530 }, 00:18:17.530 { 00:18:17.530 "name": "BaseBdev2", 00:18:17.530 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:17.530 "is_configured": true, 00:18:17.530 "data_offset": 256, 00:18:17.530 "data_size": 7936 00:18:17.530 } 00:18:17.530 ] 00:18:17.530 }' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:17.530 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=684 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.530 "name": "raid_bdev1", 00:18:17.530 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:17.530 "strip_size_kb": 0, 00:18:17.530 "state": "online", 00:18:17.530 "raid_level": "raid1", 00:18:17.530 "superblock": true, 00:18:17.530 "num_base_bdevs": 2, 00:18:17.530 "num_base_bdevs_discovered": 2, 00:18:17.530 "num_base_bdevs_operational": 2, 00:18:17.530 "process": { 00:18:17.530 "type": "rebuild", 00:18:17.530 "target": "spare", 00:18:17.530 "progress": { 00:18:17.530 "blocks": 2816, 00:18:17.530 "percent": 35 00:18:17.530 } 00:18:17.530 }, 00:18:17.530 "base_bdevs_list": [ 00:18:17.530 { 00:18:17.530 "name": "spare", 00:18:17.530 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:17.530 "is_configured": true, 
00:18:17.530 "data_offset": 256, 00:18:17.530 "data_size": 7936 00:18:17.530 }, 00:18:17.530 { 00:18:17.530 "name": "BaseBdev2", 00:18:17.530 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:17.530 "is_configured": true, 00:18:17.530 "data_offset": 256, 00:18:17.530 "data_size": 7936 00:18:17.530 } 00:18:17.530 ] 00:18:17.530 }' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.530 11:50:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.531 11:50:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.531 11:50:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.910 "name": "raid_bdev1", 00:18:18.910 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:18.910 "strip_size_kb": 0, 00:18:18.910 "state": "online", 00:18:18.910 "raid_level": "raid1", 00:18:18.910 "superblock": true, 00:18:18.910 "num_base_bdevs": 2, 00:18:18.910 "num_base_bdevs_discovered": 2, 00:18:18.910 "num_base_bdevs_operational": 2, 00:18:18.910 "process": { 00:18:18.910 "type": "rebuild", 00:18:18.910 "target": "spare", 00:18:18.910 "progress": { 00:18:18.910 "blocks": 5632, 00:18:18.910 "percent": 70 00:18:18.910 } 00:18:18.910 }, 00:18:18.910 "base_bdevs_list": [ 00:18:18.910 { 00:18:18.910 "name": "spare", 00:18:18.910 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:18.910 "is_configured": true, 00:18:18.910 "data_offset": 256, 00:18:18.910 "data_size": 7936 00:18:18.910 }, 00:18:18.910 { 00:18:18.910 "name": "BaseBdev2", 00:18:18.910 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:18.910 "is_configured": true, 00:18:18.910 "data_offset": 256, 00:18:18.910 "data_size": 7936 00:18:18.910 } 00:18:18.910 ] 00:18:18.910 }' 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.910 11:50:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.479 [2024-11-04 11:50:44.867535] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:19.479 [2024-11-04 11:50:44.867688] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:19.479 [2024-11-04 11:50:44.867836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.737 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.737 "name": "raid_bdev1", 00:18:19.737 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:19.737 "strip_size_kb": 0, 00:18:19.737 "state": "online", 00:18:19.737 "raid_level": "raid1", 00:18:19.737 "superblock": true, 00:18:19.737 "num_base_bdevs": 2, 00:18:19.737 "num_base_bdevs_discovered": 2, 00:18:19.737 "num_base_bdevs_operational": 2, 
00:18:19.737 "base_bdevs_list": [ 00:18:19.737 { 00:18:19.737 "name": "spare", 00:18:19.737 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:19.737 "is_configured": true, 00:18:19.737 "data_offset": 256, 00:18:19.737 "data_size": 7936 00:18:19.737 }, 00:18:19.737 { 00:18:19.737 "name": "BaseBdev2", 00:18:19.737 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:19.737 "is_configured": true, 00:18:19.737 "data_offset": 256, 00:18:19.737 "data_size": 7936 00:18:19.737 } 00:18:19.737 ] 00:18:19.737 }' 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.997 11:50:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.997 "name": "raid_bdev1", 00:18:19.997 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:19.997 "strip_size_kb": 0, 00:18:19.997 "state": "online", 00:18:19.997 "raid_level": "raid1", 00:18:19.997 "superblock": true, 00:18:19.997 "num_base_bdevs": 2, 00:18:19.997 "num_base_bdevs_discovered": 2, 00:18:19.997 "num_base_bdevs_operational": 2, 00:18:19.997 "base_bdevs_list": [ 00:18:19.997 { 00:18:19.997 "name": "spare", 00:18:19.997 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:19.997 "is_configured": true, 00:18:19.997 "data_offset": 256, 00:18:19.997 "data_size": 7936 00:18:19.997 }, 00:18:19.997 { 00:18:19.997 "name": "BaseBdev2", 00:18:19.997 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:19.997 "is_configured": true, 00:18:19.997 "data_offset": 256, 00:18:19.997 "data_size": 7936 00:18:19.997 } 00:18:19.997 ] 00:18:19.997 }' 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.997 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.998 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.998 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.998 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.998 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.998 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.998 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.998 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.257 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.257 "name": "raid_bdev1", 00:18:20.257 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:20.257 "strip_size_kb": 0, 00:18:20.257 "state": "online", 00:18:20.257 "raid_level": "raid1", 00:18:20.257 "superblock": true, 00:18:20.257 "num_base_bdevs": 2, 00:18:20.257 "num_base_bdevs_discovered": 2, 00:18:20.257 "num_base_bdevs_operational": 2, 00:18:20.257 "base_bdevs_list": [ 00:18:20.257 { 00:18:20.257 "name": "spare", 00:18:20.257 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:20.257 "is_configured": true, 00:18:20.257 
"data_offset": 256, 00:18:20.257 "data_size": 7936 00:18:20.257 }, 00:18:20.257 { 00:18:20.257 "name": "BaseBdev2", 00:18:20.257 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:20.257 "is_configured": true, 00:18:20.257 "data_offset": 256, 00:18:20.257 "data_size": 7936 00:18:20.257 } 00:18:20.257 ] 00:18:20.257 }' 00:18:20.257 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.257 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.516 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:20.516 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.517 [2024-11-04 11:50:45.941700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.517 [2024-11-04 11:50:45.941732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.517 [2024-11-04 11:50:45.941821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.517 [2024-11-04 11:50:45.941886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.517 [2024-11-04 11:50:45.941899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.517 11:50:45 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:20.517 11:50:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:20.776 /dev/nbd0 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:20.776 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:20.777 1+0 records in 00:18:20.777 1+0 records out 00:18:20.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446139 s, 9.2 MB/s 00:18:20.777 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.777 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:20.777 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.777 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:20.777 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:20.777 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:20.777 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:20.777 11:50:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:21.036 /dev/nbd1 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.036 1+0 records in 00:18:21.036 1+0 records out 00:18:21.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279338 s, 14.7 MB/s 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.036 
11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:21.036 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:21.296 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:21.296 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.296 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:21.296 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.296 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:21.296 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.296 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:21.555 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:21.555 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.556 11:50:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.816 [2024-11-04 11:50:47.149052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:21.816 [2024-11-04 11:50:47.149125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.816 [2024-11-04 11:50:47.149147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:21.816 [2024-11-04 11:50:47.149157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.816 [2024-11-04 11:50:47.151389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.816 [2024-11-04 11:50:47.151490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:21.816 [2024-11-04 11:50:47.151592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:21.816 [2024-11-04 11:50:47.151659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.816 [2024-11-04 11:50:47.151827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:21.816 spare 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.816 [2024-11-04 11:50:47.251733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:21.816 [2024-11-04 11:50:47.251773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4096 00:18:21.816 [2024-11-04 11:50:47.252076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:21.816 [2024-11-04 11:50:47.252263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:21.816 [2024-11-04 11:50:47.252273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:21.816 [2024-11-04 11:50:47.252473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.816 "name": "raid_bdev1", 00:18:21.816 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:21.816 "strip_size_kb": 0, 00:18:21.816 "state": "online", 00:18:21.816 "raid_level": "raid1", 00:18:21.816 "superblock": true, 00:18:21.816 "num_base_bdevs": 2, 00:18:21.816 "num_base_bdevs_discovered": 2, 00:18:21.816 "num_base_bdevs_operational": 2, 00:18:21.816 "base_bdevs_list": [ 00:18:21.816 { 00:18:21.816 "name": "spare", 00:18:21.816 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:21.816 "is_configured": true, 00:18:21.816 "data_offset": 256, 00:18:21.816 "data_size": 7936 00:18:21.816 }, 00:18:21.816 { 00:18:21.816 "name": "BaseBdev2", 00:18:21.816 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:21.816 "is_configured": true, 00:18:21.816 "data_offset": 256, 00:18:21.816 "data_size": 7936 00:18:21.816 } 00:18:21.816 ] 00:18:21.816 }' 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.816 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 
00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.385 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.386 "name": "raid_bdev1", 00:18:22.386 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:22.386 "strip_size_kb": 0, 00:18:22.386 "state": "online", 00:18:22.386 "raid_level": "raid1", 00:18:22.386 "superblock": true, 00:18:22.386 "num_base_bdevs": 2, 00:18:22.386 "num_base_bdevs_discovered": 2, 00:18:22.386 "num_base_bdevs_operational": 2, 00:18:22.386 "base_bdevs_list": [ 00:18:22.386 { 00:18:22.386 "name": "spare", 00:18:22.386 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:22.386 "is_configured": true, 00:18:22.386 "data_offset": 256, 00:18:22.386 "data_size": 7936 00:18:22.386 }, 00:18:22.386 { 00:18:22.386 "name": "BaseBdev2", 00:18:22.386 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:22.386 "is_configured": true, 00:18:22.386 "data_offset": 256, 00:18:22.386 "data_size": 7936 00:18:22.386 } 00:18:22.386 ] 00:18:22.386 }' 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.386 [2024-11-04 11:50:47.891869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.386 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.645 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.645 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.645 "name": "raid_bdev1", 00:18:22.645 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:22.645 "strip_size_kb": 0, 00:18:22.645 "state": "online", 00:18:22.645 "raid_level": "raid1", 00:18:22.645 "superblock": true, 00:18:22.645 "num_base_bdevs": 2, 00:18:22.645 "num_base_bdevs_discovered": 1, 00:18:22.645 "num_base_bdevs_operational": 1, 00:18:22.645 "base_bdevs_list": [ 00:18:22.645 { 00:18:22.645 "name": null, 00:18:22.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.645 "is_configured": false, 00:18:22.645 "data_offset": 0, 00:18:22.645 "data_size": 7936 00:18:22.645 }, 00:18:22.645 { 00:18:22.645 "name": "BaseBdev2", 00:18:22.645 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:22.645 "is_configured": true, 00:18:22.645 "data_offset": 256, 00:18:22.645 "data_size": 7936 00:18:22.645 } 00:18:22.645 ] 00:18:22.645 }' 
00:18:22.645 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.645 11:50:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.905 11:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:22.905 11:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.905 11:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.905 [2024-11-04 11:50:48.343111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.905 [2024-11-04 11:50:48.343411] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:22.905 [2024-11-04 11:50:48.343431] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:22.905 [2024-11-04 11:50:48.343469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.905 [2024-11-04 11:50:48.359913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:22.905 11:50:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.905 11:50:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:22.905 [2024-11-04 11:50:48.361778] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.282 "name": "raid_bdev1", 00:18:24.282 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:24.282 "strip_size_kb": 0, 00:18:24.282 "state": "online", 00:18:24.282 "raid_level": "raid1", 00:18:24.282 "superblock": true, 00:18:24.282 "num_base_bdevs": 2, 00:18:24.282 "num_base_bdevs_discovered": 2, 00:18:24.282 "num_base_bdevs_operational": 2, 00:18:24.282 "process": { 00:18:24.282 "type": "rebuild", 00:18:24.282 "target": "spare", 00:18:24.282 "progress": { 00:18:24.282 "blocks": 2560, 00:18:24.282 "percent": 32 00:18:24.282 } 00:18:24.282 }, 00:18:24.282 "base_bdevs_list": [ 00:18:24.282 { 00:18:24.282 "name": "spare", 00:18:24.282 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:24.282 "is_configured": true, 00:18:24.282 "data_offset": 256, 00:18:24.282 "data_size": 7936 00:18:24.282 }, 00:18:24.282 { 00:18:24.282 "name": "BaseBdev2", 00:18:24.282 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:24.282 "is_configured": true, 00:18:24.282 "data_offset": 256, 00:18:24.282 "data_size": 7936 00:18:24.282 } 00:18:24.282 ] 00:18:24.282 }' 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.282 [2024-11-04 11:50:49.521431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.282 [2024-11-04 11:50:49.566830] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:24.282 [2024-11-04 11:50:49.566888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.282 [2024-11-04 11:50:49.566903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:24.282 [2024-11-04 11:50:49.566911] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.282 "name": "raid_bdev1", 00:18:24.282 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:24.282 "strip_size_kb": 0, 00:18:24.282 "state": "online", 00:18:24.282 "raid_level": "raid1", 00:18:24.282 "superblock": true, 00:18:24.282 "num_base_bdevs": 2, 00:18:24.282 "num_base_bdevs_discovered": 1, 00:18:24.282 "num_base_bdevs_operational": 1, 00:18:24.282 "base_bdevs_list": [ 00:18:24.282 { 00:18:24.282 "name": null, 00:18:24.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.282 "is_configured": false, 00:18:24.282 "data_offset": 0, 00:18:24.282 "data_size": 7936 00:18:24.282 }, 00:18:24.282 { 00:18:24.282 "name": "BaseBdev2", 00:18:24.282 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:24.282 "is_configured": true, 00:18:24.282 
"data_offset": 256, 00:18:24.282 "data_size": 7936 00:18:24.282 } 00:18:24.282 ] 00:18:24.282 }' 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.282 11:50:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.851 11:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:24.851 11:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.851 11:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.851 [2024-11-04 11:50:50.073106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:24.851 [2024-11-04 11:50:50.073257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.851 [2024-11-04 11:50:50.073310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:24.851 [2024-11-04 11:50:50.073349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.851 [2024-11-04 11:50:50.073862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.851 [2024-11-04 11:50:50.073932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:24.851 [2024-11-04 11:50:50.074070] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:24.851 [2024-11-04 11:50:50.074121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:24.851 [2024-11-04 11:50:50.074174] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:24.851 [2024-11-04 11:50:50.074229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.851 [2024-11-04 11:50:50.090268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:24.851 spare 00:18:24.851 11:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.851 [2024-11-04 11:50:50.092164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.851 11:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.788 "name": "raid_bdev1", 00:18:25.788 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:25.788 "strip_size_kb": 0, 00:18:25.788 
"state": "online", 00:18:25.788 "raid_level": "raid1", 00:18:25.788 "superblock": true, 00:18:25.788 "num_base_bdevs": 2, 00:18:25.788 "num_base_bdevs_discovered": 2, 00:18:25.788 "num_base_bdevs_operational": 2, 00:18:25.788 "process": { 00:18:25.788 "type": "rebuild", 00:18:25.788 "target": "spare", 00:18:25.788 "progress": { 00:18:25.788 "blocks": 2560, 00:18:25.788 "percent": 32 00:18:25.788 } 00:18:25.788 }, 00:18:25.788 "base_bdevs_list": [ 00:18:25.788 { 00:18:25.788 "name": "spare", 00:18:25.788 "uuid": "dca7c0d1-3a12-556b-b0fd-edb093b603e3", 00:18:25.788 "is_configured": true, 00:18:25.788 "data_offset": 256, 00:18:25.788 "data_size": 7936 00:18:25.788 }, 00:18:25.788 { 00:18:25.788 "name": "BaseBdev2", 00:18:25.788 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:25.788 "is_configured": true, 00:18:25.788 "data_offset": 256, 00:18:25.788 "data_size": 7936 00:18:25.788 } 00:18:25.788 ] 00:18:25.788 }' 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.788 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.788 [2024-11-04 11:50:51.232522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.788 [2024-11-04 11:50:51.297402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:25.788 [2024-11-04 11:50:51.297584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.788 [2024-11-04 11:50:51.297625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:25.788 [2024-11-04 11:50:51.297648] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.048 11:50:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.048 "name": "raid_bdev1", 00:18:26.048 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:26.048 "strip_size_kb": 0, 00:18:26.048 "state": "online", 00:18:26.048 "raid_level": "raid1", 00:18:26.048 "superblock": true, 00:18:26.048 "num_base_bdevs": 2, 00:18:26.048 "num_base_bdevs_discovered": 1, 00:18:26.048 "num_base_bdevs_operational": 1, 00:18:26.048 "base_bdevs_list": [ 00:18:26.048 { 00:18:26.048 "name": null, 00:18:26.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.048 "is_configured": false, 00:18:26.048 "data_offset": 0, 00:18:26.048 "data_size": 7936 00:18:26.048 }, 00:18:26.048 { 00:18:26.048 "name": "BaseBdev2", 00:18:26.048 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:26.048 "is_configured": true, 00:18:26.048 "data_offset": 256, 00:18:26.048 "data_size": 7936 00:18:26.048 } 00:18:26.048 ] 00:18:26.048 }' 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.048 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.308 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.567 "name": "raid_bdev1", 00:18:26.567 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:26.567 "strip_size_kb": 0, 00:18:26.567 "state": "online", 00:18:26.567 "raid_level": "raid1", 00:18:26.567 "superblock": true, 00:18:26.567 "num_base_bdevs": 2, 00:18:26.567 "num_base_bdevs_discovered": 1, 00:18:26.567 "num_base_bdevs_operational": 1, 00:18:26.567 "base_bdevs_list": [ 00:18:26.567 { 00:18:26.567 "name": null, 00:18:26.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.567 "is_configured": false, 00:18:26.567 "data_offset": 0, 00:18:26.567 "data_size": 7936 00:18:26.567 }, 00:18:26.567 { 00:18:26.567 "name": "BaseBdev2", 00:18:26.567 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:26.567 "is_configured": true, 00:18:26.567 "data_offset": 256, 00:18:26.567 "data_size": 7936 00:18:26.567 } 00:18:26.567 ] 00:18:26.567 }' 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:26.567 [2024-11-04 11:50:51.945307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.567 [2024-11-04 11:50:51.945368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.567 [2024-11-04 11:50:51.945392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:26.567 [2024-11-04 11:50:51.945428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.567 [2024-11-04 11:50:51.945949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.567 [2024-11-04 11:50:51.945975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.567 [2024-11-04 11:50:51.946067] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:26.567 [2024-11-04 11:50:51.946082] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:26.567 [2024-11-04 11:50:51.946092] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:26.567 [2024-11-04 11:50:51.946102] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:26.567 BaseBdev1 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.567 11:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.506 11:50:52 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.506 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.506 "name": "raid_bdev1", 00:18:27.506 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:27.506 "strip_size_kb": 0, 00:18:27.506 "state": "online", 00:18:27.506 "raid_level": "raid1", 00:18:27.506 "superblock": true, 00:18:27.506 "num_base_bdevs": 2, 00:18:27.506 "num_base_bdevs_discovered": 1, 00:18:27.506 "num_base_bdevs_operational": 1, 00:18:27.506 "base_bdevs_list": [ 00:18:27.506 { 00:18:27.506 "name": null, 00:18:27.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.506 "is_configured": false, 00:18:27.506 "data_offset": 0, 00:18:27.506 "data_size": 7936 00:18:27.506 }, 00:18:27.506 { 00:18:27.506 "name": "BaseBdev2", 00:18:27.506 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:27.506 "is_configured": true, 00:18:27.506 "data_offset": 256, 00:18:27.506 "data_size": 7936 00:18:27.506 } 00:18:27.506 ] 00:18:27.506 }' 00:18:27.506 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.506 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.073 "name": "raid_bdev1", 00:18:28.073 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:28.073 "strip_size_kb": 0, 00:18:28.073 "state": "online", 00:18:28.073 "raid_level": "raid1", 00:18:28.073 "superblock": true, 00:18:28.073 "num_base_bdevs": 2, 00:18:28.073 "num_base_bdevs_discovered": 1, 00:18:28.073 "num_base_bdevs_operational": 1, 00:18:28.073 "base_bdevs_list": [ 00:18:28.073 { 00:18:28.073 "name": null, 00:18:28.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.073 "is_configured": false, 00:18:28.073 "data_offset": 0, 00:18:28.073 "data_size": 7936 00:18:28.073 }, 00:18:28.073 { 00:18:28.073 "name": "BaseBdev2", 00:18:28.073 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:28.073 "is_configured": true, 00:18:28.073 "data_offset": 256, 00:18:28.073 "data_size": 7936 00:18:28.073 } 00:18:28.073 ] 00:18:28.073 }' 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.073 [2024-11-04 11:50:53.558753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.073 [2024-11-04 11:50:53.559016] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:28.073 [2024-11-04 11:50:53.559038] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:28.073 request: 00:18:28.073 { 00:18:28.073 "base_bdev": "BaseBdev1", 00:18:28.073 "raid_bdev": "raid_bdev1", 00:18:28.073 "method": "bdev_raid_add_base_bdev", 00:18:28.073 "req_id": 1 00:18:28.073 } 00:18:28.073 Got JSON-RPC error response 00:18:28.073 response: 00:18:28.073 { 00:18:28.073 "code": -22, 00:18:28.073 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:28.073 } 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:28.073 11:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:29.453 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.454 "name": "raid_bdev1", 00:18:29.454 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:29.454 "strip_size_kb": 0, 00:18:29.454 "state": "online", 00:18:29.454 "raid_level": "raid1", 00:18:29.454 "superblock": true, 00:18:29.454 "num_base_bdevs": 2, 00:18:29.454 "num_base_bdevs_discovered": 1, 00:18:29.454 "num_base_bdevs_operational": 1, 00:18:29.454 "base_bdevs_list": [ 00:18:29.454 { 00:18:29.454 "name": null, 00:18:29.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.454 "is_configured": false, 00:18:29.454 "data_offset": 0, 00:18:29.454 "data_size": 7936 00:18:29.454 }, 00:18:29.454 { 00:18:29.454 "name": "BaseBdev2", 00:18:29.454 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:29.454 "is_configured": true, 00:18:29.454 "data_offset": 256, 00:18:29.454 "data_size": 7936 00:18:29.454 } 00:18:29.454 ] 00:18:29.454 }' 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.454 11:50:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.714 11:50:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.714 "name": "raid_bdev1", 00:18:29.714 "uuid": "5dabd9c9-d9ea-41e9-bef1-b896a91d652e", 00:18:29.714 "strip_size_kb": 0, 00:18:29.714 "state": "online", 00:18:29.714 "raid_level": "raid1", 00:18:29.714 "superblock": true, 00:18:29.714 "num_base_bdevs": 2, 00:18:29.714 "num_base_bdevs_discovered": 1, 00:18:29.714 "num_base_bdevs_operational": 1, 00:18:29.714 "base_bdevs_list": [ 00:18:29.714 { 00:18:29.714 "name": null, 00:18:29.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.714 "is_configured": false, 00:18:29.714 "data_offset": 0, 00:18:29.714 "data_size": 7936 00:18:29.714 }, 00:18:29.714 { 00:18:29.714 "name": "BaseBdev2", 00:18:29.714 "uuid": "ac4ea7ee-e4d5-539e-bc81-066473558b57", 00:18:29.714 "is_configured": true, 00:18:29.714 "data_offset": 256, 00:18:29.714 "data_size": 7936 00:18:29.714 } 00:18:29.714 ] 00:18:29.714 }' 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.714 11:50:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86741 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86741 ']' 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86741 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86741 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:29.714 killing process with pid 86741 00:18:29.714 Received shutdown signal, test time was about 60.000000 seconds 00:18:29.714 00:18:29.714 Latency(us) 00:18:29.714 [2024-11-04T11:50:55.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.714 [2024-11-04T11:50:55.236Z] =================================================================================================================== 00:18:29.714 [2024-11-04T11:50:55.236Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86741' 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86741 00:18:29.714 [2024-11-04 11:50:55.229318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:29.714 [2024-11-04 11:50:55.229487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.714 11:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86741 00:18:29.714 [2024-11-04 
11:50:55.229540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.714 [2024-11-04 11:50:55.229551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:30.283 [2024-11-04 11:50:55.533380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.222 ************************************ 00:18:31.222 END TEST raid_rebuild_test_sb_4k 00:18:31.223 ************************************ 00:18:31.223 11:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:31.223 00:18:31.223 real 0m20.001s 00:18:31.223 user 0m26.272s 00:18:31.223 sys 0m2.572s 00:18:31.223 11:50:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:31.223 11:50:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.223 11:50:56 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:31.223 11:50:56 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:31.223 11:50:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:31.223 11:50:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:31.223 11:50:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.223 ************************************ 00:18:31.223 START TEST raid_state_function_test_sb_md_separate 00:18:31.223 ************************************ 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:31.223 
11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:31.223 11:50:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87431 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87431' 00:18:31.223 Process raid pid: 87431 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87431 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87431 ']' 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.223 11:50:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.482 [2024-11-04 11:50:56.797815] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:18:31.482 [2024-11-04 11:50:56.797944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.482 [2024-11-04 11:50:56.976210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.741 [2024-11-04 11:50:57.099322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.001 [2024-11-04 11:50:57.306710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.001 [2024-11-04 11:50:57.306767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.260 [2024-11-04 11:50:57.644352] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.260 [2024-11-04 11:50:57.644423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:32.260 [2024-11-04 11:50:57.644435] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.260 [2024-11-04 11:50:57.644445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.260 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:32.261 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.261 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.261 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.261 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.261 "name": "Existed_Raid", 00:18:32.261 "uuid": "a0ce6732-4064-4585-b96f-bf5d6f88c3c9", 00:18:32.261 "strip_size_kb": 0, 00:18:32.261 "state": "configuring", 00:18:32.261 "raid_level": "raid1", 00:18:32.261 "superblock": true, 00:18:32.261 "num_base_bdevs": 2, 00:18:32.261 "num_base_bdevs_discovered": 0, 00:18:32.261 "num_base_bdevs_operational": 2, 00:18:32.261 "base_bdevs_list": [ 00:18:32.261 { 00:18:32.261 "name": "BaseBdev1", 00:18:32.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.261 "is_configured": false, 00:18:32.261 "data_offset": 0, 00:18:32.261 "data_size": 0 00:18:32.261 }, 00:18:32.261 { 00:18:32.261 "name": "BaseBdev2", 00:18:32.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.261 "is_configured": false, 00:18:32.261 "data_offset": 0, 00:18:32.261 "data_size": 0 00:18:32.261 } 00:18:32.261 ] 00:18:32.261 }' 00:18:32.261 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.261 11:50:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 
[2024-11-04 11:50:58.123550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:32.830 [2024-11-04 11:50:58.123652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 [2024-11-04 11:50:58.135537] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:32.830 [2024-11-04 11:50:58.135622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:32.830 [2024-11-04 11:50:58.135656] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.830 [2024-11-04 11:50:58.135683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 [2024-11-04 11:50:58.189451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.830 
BaseBdev1 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.830 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.830 [ 00:18:32.830 { 00:18:32.830 "name": "BaseBdev1", 00:18:32.830 "aliases": [ 00:18:32.830 "f2c22448-8caa-4ab4-8ec1-7a7919597055" 00:18:32.830 ], 00:18:32.830 "product_name": "Malloc disk", 
00:18:32.830 "block_size": 4096, 00:18:32.830 "num_blocks": 8192, 00:18:32.830 "uuid": "f2c22448-8caa-4ab4-8ec1-7a7919597055", 00:18:32.830 "md_size": 32, 00:18:32.830 "md_interleave": false, 00:18:32.830 "dif_type": 0, 00:18:32.830 "assigned_rate_limits": { 00:18:32.830 "rw_ios_per_sec": 0, 00:18:32.830 "rw_mbytes_per_sec": 0, 00:18:32.830 "r_mbytes_per_sec": 0, 00:18:32.830 "w_mbytes_per_sec": 0 00:18:32.830 }, 00:18:32.830 "claimed": true, 00:18:32.830 "claim_type": "exclusive_write", 00:18:32.830 "zoned": false, 00:18:32.830 "supported_io_types": { 00:18:32.830 "read": true, 00:18:32.830 "write": true, 00:18:32.830 "unmap": true, 00:18:32.830 "flush": true, 00:18:32.830 "reset": true, 00:18:32.830 "nvme_admin": false, 00:18:32.830 "nvme_io": false, 00:18:32.830 "nvme_io_md": false, 00:18:32.830 "write_zeroes": true, 00:18:32.830 "zcopy": true, 00:18:32.830 "get_zone_info": false, 00:18:32.830 "zone_management": false, 00:18:32.830 "zone_append": false, 00:18:32.831 "compare": false, 00:18:32.831 "compare_and_write": false, 00:18:32.831 "abort": true, 00:18:32.831 "seek_hole": false, 00:18:32.831 "seek_data": false, 00:18:32.831 "copy": true, 00:18:32.831 "nvme_iov_md": false 00:18:32.831 }, 00:18:32.831 "memory_domains": [ 00:18:32.831 { 00:18:32.831 "dma_device_id": "system", 00:18:32.831 "dma_device_type": 1 00:18:32.831 }, 00:18:32.831 { 00:18:32.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.831 "dma_device_type": 2 00:18:32.831 } 00:18:32.831 ], 00:18:32.831 "driver_specific": {} 00:18:32.831 } 00:18:32.831 ] 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:32.831 11:50:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.831 "name": "Existed_Raid", 00:18:32.831 "uuid": "6c8e178a-5ae1-4b48-b0fa-4c33f637b7a5", 
00:18:32.831 "strip_size_kb": 0, 00:18:32.831 "state": "configuring", 00:18:32.831 "raid_level": "raid1", 00:18:32.831 "superblock": true, 00:18:32.831 "num_base_bdevs": 2, 00:18:32.831 "num_base_bdevs_discovered": 1, 00:18:32.831 "num_base_bdevs_operational": 2, 00:18:32.831 "base_bdevs_list": [ 00:18:32.831 { 00:18:32.831 "name": "BaseBdev1", 00:18:32.831 "uuid": "f2c22448-8caa-4ab4-8ec1-7a7919597055", 00:18:32.831 "is_configured": true, 00:18:32.831 "data_offset": 256, 00:18:32.831 "data_size": 7936 00:18:32.831 }, 00:18:32.831 { 00:18:32.831 "name": "BaseBdev2", 00:18:32.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.831 "is_configured": false, 00:18:32.831 "data_offset": 0, 00:18:32.831 "data_size": 0 00:18:32.831 } 00:18:32.831 ] 00:18:32.831 }' 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.831 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.404 [2024-11-04 11:50:58.648754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:33.404 [2024-11-04 11:50:58.648817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:33.404 11:50:58 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.404 [2024-11-04 11:50:58.660768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.404 [2024-11-04 11:50:58.662682] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.404 [2024-11-04 11:50:58.662775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.404 "name": "Existed_Raid", 00:18:33.404 "uuid": "08d78645-e189-423e-a112-e86ca2fe184b", 00:18:33.404 "strip_size_kb": 0, 00:18:33.404 "state": "configuring", 00:18:33.404 "raid_level": "raid1", 00:18:33.404 "superblock": true, 00:18:33.404 "num_base_bdevs": 2, 00:18:33.404 "num_base_bdevs_discovered": 1, 00:18:33.404 "num_base_bdevs_operational": 2, 00:18:33.404 "base_bdevs_list": [ 00:18:33.404 { 00:18:33.404 "name": "BaseBdev1", 00:18:33.404 "uuid": "f2c22448-8caa-4ab4-8ec1-7a7919597055", 00:18:33.404 "is_configured": true, 00:18:33.404 "data_offset": 256, 00:18:33.404 "data_size": 7936 00:18:33.404 }, 00:18:33.404 { 00:18:33.404 "name": "BaseBdev2", 00:18:33.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.404 "is_configured": false, 00:18:33.404 "data_offset": 0, 00:18:33.404 "data_size": 0 00:18:33.404 } 00:18:33.404 ] 00:18:33.404 }' 00:18:33.404 11:50:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.404 11:50:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.663 [2024-11-04 11:50:59.161032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.663 [2024-11-04 11:50:59.161435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:33.663 [2024-11-04 11:50:59.161490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:33.663 [2024-11-04 11:50:59.161621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:33.663 [2024-11-04 11:50:59.161785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:33.663 [2024-11-04 11:50:59.161826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:33.663 [2024-11-04 11:50:59.161982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.663 BaseBdev2 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.663 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.923 [ 00:18:33.923 { 00:18:33.923 "name": "BaseBdev2", 00:18:33.923 "aliases": [ 00:18:33.923 "9ba1508a-4de1-45f7-a45f-d86216581404" 00:18:33.923 ], 00:18:33.923 "product_name": "Malloc disk", 00:18:33.923 "block_size": 4096, 00:18:33.923 "num_blocks": 8192, 00:18:33.923 "uuid": "9ba1508a-4de1-45f7-a45f-d86216581404", 00:18:33.923 "md_size": 32, 00:18:33.923 "md_interleave": false, 00:18:33.923 "dif_type": 0, 00:18:33.923 "assigned_rate_limits": { 00:18:33.923 "rw_ios_per_sec": 0, 00:18:33.923 "rw_mbytes_per_sec": 0, 00:18:33.923 "r_mbytes_per_sec": 0, 00:18:33.923 "w_mbytes_per_sec": 0 00:18:33.923 }, 00:18:33.923 "claimed": true, 00:18:33.923 "claim_type": 
"exclusive_write", 00:18:33.923 "zoned": false, 00:18:33.923 "supported_io_types": { 00:18:33.923 "read": true, 00:18:33.923 "write": true, 00:18:33.923 "unmap": true, 00:18:33.923 "flush": true, 00:18:33.923 "reset": true, 00:18:33.923 "nvme_admin": false, 00:18:33.923 "nvme_io": false, 00:18:33.923 "nvme_io_md": false, 00:18:33.923 "write_zeroes": true, 00:18:33.923 "zcopy": true, 00:18:33.923 "get_zone_info": false, 00:18:33.923 "zone_management": false, 00:18:33.923 "zone_append": false, 00:18:33.923 "compare": false, 00:18:33.923 "compare_and_write": false, 00:18:33.923 "abort": true, 00:18:33.923 "seek_hole": false, 00:18:33.923 "seek_data": false, 00:18:33.923 "copy": true, 00:18:33.923 "nvme_iov_md": false 00:18:33.923 }, 00:18:33.923 "memory_domains": [ 00:18:33.923 { 00:18:33.924 "dma_device_id": "system", 00:18:33.924 "dma_device_type": 1 00:18:33.924 }, 00:18:33.924 { 00:18:33.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.924 "dma_device_type": 2 00:18:33.924 } 00:18:33.924 ], 00:18:33.924 "driver_specific": {} 00:18:33.924 } 00:18:33.924 ] 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.924 
11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.924 "name": "Existed_Raid", 00:18:33.924 "uuid": "08d78645-e189-423e-a112-e86ca2fe184b", 00:18:33.924 "strip_size_kb": 0, 00:18:33.924 "state": "online", 00:18:33.924 "raid_level": "raid1", 00:18:33.924 "superblock": true, 00:18:33.924 "num_base_bdevs": 2, 00:18:33.924 "num_base_bdevs_discovered": 2, 00:18:33.924 "num_base_bdevs_operational": 2, 00:18:33.924 
"base_bdevs_list": [ 00:18:33.924 { 00:18:33.924 "name": "BaseBdev1", 00:18:33.924 "uuid": "f2c22448-8caa-4ab4-8ec1-7a7919597055", 00:18:33.924 "is_configured": true, 00:18:33.924 "data_offset": 256, 00:18:33.924 "data_size": 7936 00:18:33.924 }, 00:18:33.924 { 00:18:33.924 "name": "BaseBdev2", 00:18:33.924 "uuid": "9ba1508a-4de1-45f7-a45f-d86216581404", 00:18:33.924 "is_configured": true, 00:18:33.924 "data_offset": 256, 00:18:33.924 "data_size": 7936 00:18:33.924 } 00:18:33.924 ] 00:18:33.924 }' 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.924 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.183 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:18:34.183 [2024-11-04 11:50:59.688507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.442 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.442 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:34.442 "name": "Existed_Raid", 00:18:34.442 "aliases": [ 00:18:34.442 "08d78645-e189-423e-a112-e86ca2fe184b" 00:18:34.442 ], 00:18:34.442 "product_name": "Raid Volume", 00:18:34.442 "block_size": 4096, 00:18:34.442 "num_blocks": 7936, 00:18:34.442 "uuid": "08d78645-e189-423e-a112-e86ca2fe184b", 00:18:34.442 "md_size": 32, 00:18:34.442 "md_interleave": false, 00:18:34.442 "dif_type": 0, 00:18:34.442 "assigned_rate_limits": { 00:18:34.442 "rw_ios_per_sec": 0, 00:18:34.442 "rw_mbytes_per_sec": 0, 00:18:34.442 "r_mbytes_per_sec": 0, 00:18:34.442 "w_mbytes_per_sec": 0 00:18:34.442 }, 00:18:34.442 "claimed": false, 00:18:34.442 "zoned": false, 00:18:34.442 "supported_io_types": { 00:18:34.442 "read": true, 00:18:34.442 "write": true, 00:18:34.442 "unmap": false, 00:18:34.442 "flush": false, 00:18:34.442 "reset": true, 00:18:34.442 "nvme_admin": false, 00:18:34.442 "nvme_io": false, 00:18:34.442 "nvme_io_md": false, 00:18:34.442 "write_zeroes": true, 00:18:34.442 "zcopy": false, 00:18:34.442 "get_zone_info": false, 00:18:34.442 "zone_management": false, 00:18:34.442 "zone_append": false, 00:18:34.442 "compare": false, 00:18:34.442 "compare_and_write": false, 00:18:34.442 "abort": false, 00:18:34.442 "seek_hole": false, 00:18:34.442 "seek_data": false, 00:18:34.442 "copy": false, 00:18:34.442 "nvme_iov_md": false 00:18:34.442 }, 00:18:34.442 "memory_domains": [ 00:18:34.442 { 00:18:34.442 "dma_device_id": "system", 00:18:34.442 "dma_device_type": 1 00:18:34.442 }, 00:18:34.442 { 00:18:34.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.442 "dma_device_type": 2 00:18:34.442 }, 00:18:34.442 { 
00:18:34.442 "dma_device_id": "system", 00:18:34.442 "dma_device_type": 1 00:18:34.442 }, 00:18:34.442 { 00:18:34.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.442 "dma_device_type": 2 00:18:34.442 } 00:18:34.442 ], 00:18:34.442 "driver_specific": { 00:18:34.442 "raid": { 00:18:34.442 "uuid": "08d78645-e189-423e-a112-e86ca2fe184b", 00:18:34.442 "strip_size_kb": 0, 00:18:34.442 "state": "online", 00:18:34.442 "raid_level": "raid1", 00:18:34.442 "superblock": true, 00:18:34.442 "num_base_bdevs": 2, 00:18:34.442 "num_base_bdevs_discovered": 2, 00:18:34.442 "num_base_bdevs_operational": 2, 00:18:34.442 "base_bdevs_list": [ 00:18:34.442 { 00:18:34.442 "name": "BaseBdev1", 00:18:34.442 "uuid": "f2c22448-8caa-4ab4-8ec1-7a7919597055", 00:18:34.442 "is_configured": true, 00:18:34.442 "data_offset": 256, 00:18:34.442 "data_size": 7936 00:18:34.442 }, 00:18:34.443 { 00:18:34.443 "name": "BaseBdev2", 00:18:34.443 "uuid": "9ba1508a-4de1-45f7-a45f-d86216581404", 00:18:34.443 "is_configured": true, 00:18:34.443 "data_offset": 256, 00:18:34.443 "data_size": 7936 00:18:34.443 } 00:18:34.443 ] 00:18:34.443 } 00:18:34.443 } 00:18:34.443 }' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:34.443 BaseBdev2' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.443 11:50:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.443 [2024-11-04 11:50:59.919907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.702 "name": "Existed_Raid", 00:18:34.702 "uuid": "08d78645-e189-423e-a112-e86ca2fe184b", 00:18:34.702 "strip_size_kb": 0, 00:18:34.702 "state": "online", 00:18:34.702 "raid_level": "raid1", 00:18:34.702 "superblock": true, 00:18:34.702 "num_base_bdevs": 2, 00:18:34.702 "num_base_bdevs_discovered": 1, 00:18:34.702 "num_base_bdevs_operational": 1, 00:18:34.702 "base_bdevs_list": [ 00:18:34.702 { 00:18:34.702 "name": null, 00:18:34.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.702 "is_configured": false, 00:18:34.702 "data_offset": 0, 00:18:34.702 "data_size": 7936 00:18:34.702 }, 00:18:34.702 { 00:18:34.702 "name": "BaseBdev2", 00:18:34.702 "uuid": 
"9ba1508a-4de1-45f7-a45f-d86216581404", 00:18:34.702 "is_configured": true, 00:18:34.702 "data_offset": 256, 00:18:34.702 "data_size": 7936 00:18:34.702 } 00:18:34.702 ] 00:18:34.702 }' 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.702 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.272 [2024-11-04 11:51:00.549736] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.272 [2024-11-04 11:51:00.549840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:35.272 [2024-11-04 11:51:00.660114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:35.272 [2024-11-04 11:51:00.660172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:35.272 [2024-11-04 11:51:00.660186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:35.272 11:51:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87431 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87431 ']' 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87431 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87431 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87431' 00:18:35.272 killing process with pid 87431 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87431 00:18:35.272 [2024-11-04 11:51:00.748341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:35.272 11:51:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87431 00:18:35.272 [2024-11-04 11:51:00.765585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.653 11:51:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:36.653 00:18:36.653 real 0m5.198s 00:18:36.653 user 0m7.461s 00:18:36.653 sys 0m0.919s 00:18:36.653 ************************************ 00:18:36.653 END TEST raid_state_function_test_sb_md_separate 00:18:36.653 
************************************ 00:18:36.653 11:51:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:36.653 11:51:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.653 11:51:01 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:36.653 11:51:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:36.653 11:51:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:36.653 11:51:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.653 ************************************ 00:18:36.653 START TEST raid_superblock_test_md_separate 00:18:36.653 ************************************ 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:36.653 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87678 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87678 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87678 ']' 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.654 11:51:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.654 [2024-11-04 11:51:02.064907] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:18:36.654 [2024-11-04 11:51:02.065109] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87678 ] 00:18:36.913 [2024-11-04 11:51:02.226716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.913 [2024-11-04 11:51:02.343057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.171 [2024-11-04 11:51:02.549508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.171 [2024-11-04 11:51:02.549573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:37.740 11:51:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.740 11:51:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.740 malloc1 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.740 [2024-11-04 11:51:03.016893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.740 [2024-11-04 11:51:03.017023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.740 [2024-11-04 11:51:03.017107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:37.740 [2024-11-04 11:51:03.017148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.740 [2024-11-04 11:51:03.019235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.740 [2024-11-04 11:51:03.019307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:37.740 pt1 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.740 malloc2 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.740 11:51:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.740 [2024-11-04 11:51:03.077929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.740 [2024-11-04 11:51:03.078063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.740 [2024-11-04 11:51:03.078140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:37.740 [2024-11-04 11:51:03.078190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.740 [2024-11-04 11:51:03.080517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.740 [2024-11-04 11:51:03.080599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.740 pt2 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.740 [2024-11-04 11:51:03.089954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.740 [2024-11-04 11:51:03.092099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.740 [2024-11-04 11:51:03.092390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:37.740 [2024-11-04 11:51:03.092425] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:37.740 [2024-11-04 11:51:03.092541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:37.740 [2024-11-04 11:51:03.092696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:37.740 [2024-11-04 11:51:03.092711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:37.740 [2024-11-04 11:51:03.092874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.740 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.741 11:51:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.741 "name": "raid_bdev1", 00:18:37.741 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:37.741 "strip_size_kb": 0, 00:18:37.741 "state": "online", 00:18:37.741 "raid_level": "raid1", 00:18:37.741 "superblock": true, 00:18:37.741 "num_base_bdevs": 2, 00:18:37.741 "num_base_bdevs_discovered": 2, 00:18:37.741 "num_base_bdevs_operational": 2, 00:18:37.741 "base_bdevs_list": [ 00:18:37.741 { 00:18:37.741 "name": "pt1", 00:18:37.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.741 "is_configured": true, 00:18:37.741 "data_offset": 256, 00:18:37.741 "data_size": 7936 00:18:37.741 }, 00:18:37.741 { 00:18:37.741 "name": "pt2", 00:18:37.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.741 "is_configured": true, 00:18:37.741 "data_offset": 256, 00:18:37.741 "data_size": 7936 00:18:37.741 } 00:18:37.741 ] 00:18:37.741 }' 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.741 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.318 [2024-11-04 11:51:03.545431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:38.318 "name": "raid_bdev1", 00:18:38.318 "aliases": [ 00:18:38.318 "40117a79-0cda-4c53-8b3b-93d9b38782fe" 00:18:38.318 ], 00:18:38.318 "product_name": "Raid Volume", 00:18:38.318 "block_size": 4096, 00:18:38.318 "num_blocks": 7936, 00:18:38.318 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:38.318 "md_size": 32, 00:18:38.318 "md_interleave": false, 00:18:38.318 "dif_type": 0, 00:18:38.318 "assigned_rate_limits": { 00:18:38.318 "rw_ios_per_sec": 0, 00:18:38.318 "rw_mbytes_per_sec": 0, 00:18:38.318 "r_mbytes_per_sec": 0, 00:18:38.318 "w_mbytes_per_sec": 0 00:18:38.318 }, 00:18:38.318 "claimed": false, 00:18:38.318 "zoned": false, 
00:18:38.318 "supported_io_types": { 00:18:38.318 "read": true, 00:18:38.318 "write": true, 00:18:38.318 "unmap": false, 00:18:38.318 "flush": false, 00:18:38.318 "reset": true, 00:18:38.318 "nvme_admin": false, 00:18:38.318 "nvme_io": false, 00:18:38.318 "nvme_io_md": false, 00:18:38.318 "write_zeroes": true, 00:18:38.318 "zcopy": false, 00:18:38.318 "get_zone_info": false, 00:18:38.318 "zone_management": false, 00:18:38.318 "zone_append": false, 00:18:38.318 "compare": false, 00:18:38.318 "compare_and_write": false, 00:18:38.318 "abort": false, 00:18:38.318 "seek_hole": false, 00:18:38.318 "seek_data": false, 00:18:38.318 "copy": false, 00:18:38.318 "nvme_iov_md": false 00:18:38.318 }, 00:18:38.318 "memory_domains": [ 00:18:38.318 { 00:18:38.318 "dma_device_id": "system", 00:18:38.318 "dma_device_type": 1 00:18:38.318 }, 00:18:38.318 { 00:18:38.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.318 "dma_device_type": 2 00:18:38.318 }, 00:18:38.318 { 00:18:38.318 "dma_device_id": "system", 00:18:38.318 "dma_device_type": 1 00:18:38.318 }, 00:18:38.318 { 00:18:38.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.318 "dma_device_type": 2 00:18:38.318 } 00:18:38.318 ], 00:18:38.318 "driver_specific": { 00:18:38.318 "raid": { 00:18:38.318 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:38.318 "strip_size_kb": 0, 00:18:38.318 "state": "online", 00:18:38.318 "raid_level": "raid1", 00:18:38.318 "superblock": true, 00:18:38.318 "num_base_bdevs": 2, 00:18:38.318 "num_base_bdevs_discovered": 2, 00:18:38.318 "num_base_bdevs_operational": 2, 00:18:38.318 "base_bdevs_list": [ 00:18:38.318 { 00:18:38.318 "name": "pt1", 00:18:38.318 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.318 "is_configured": true, 00:18:38.318 "data_offset": 256, 00:18:38.318 "data_size": 7936 00:18:38.318 }, 00:18:38.318 { 00:18:38.318 "name": "pt2", 00:18:38.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.318 "is_configured": true, 00:18:38.318 "data_offset": 256, 
00:18:38.318 "data_size": 7936 00:18:38.318 } 00:18:38.318 ] 00:18:38.318 } 00:18:38.318 } 00:18:38.318 }' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:38.318 pt2' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.318 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:38.319 [2024-11-04 11:51:03.801026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40117a79-0cda-4c53-8b3b-93d9b38782fe 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 40117a79-0cda-4c53-8b3b-93d9b38782fe ']' 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.319 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 [2024-11-04 11:51:03.836625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.319 [2024-11-04 11:51:03.836729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.319 [2024-11-04 11:51:03.836839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.319 [2024-11-04 11:51:03.836923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.319 [2024-11-04 11:51:03.836936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:38.578 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.578 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:38.578 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.578 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.578 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.578 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.578 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:38.579 11:51:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 [2024-11-04 11:51:03.960510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:38.579 [2024-11-04 11:51:03.962597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:38.579 [2024-11-04 11:51:03.962772] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:38.579 [2024-11-04 11:51:03.962841] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:38.579 [2024-11-04 11:51:03.962859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.579 [2024-11-04 11:51:03.962871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:38.579 request: 00:18:38.579 { 00:18:38.579 "name": 
"raid_bdev1", 00:18:38.579 "raid_level": "raid1", 00:18:38.579 "base_bdevs": [ 00:18:38.579 "malloc1", 00:18:38.579 "malloc2" 00:18:38.579 ], 00:18:38.579 "superblock": false, 00:18:38.579 "method": "bdev_raid_create", 00:18:38.579 "req_id": 1 00:18:38.579 } 00:18:38.579 Got JSON-RPC error response 00:18:38.579 response: 00:18:38.579 { 00:18:38.579 "code": -17, 00:18:38.579 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:38.579 } 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 11:51:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 [2024-11-04 11:51:04.016392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:38.579 [2024-11-04 11:51:04.016539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.579 [2024-11-04 11:51:04.016581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:38.579 [2024-11-04 11:51:04.016619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.579 [2024-11-04 11:51:04.018954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.579 [2024-11-04 11:51:04.019061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:38.579 [2024-11-04 11:51:04.019153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:38.579 [2024-11-04 11:51:04.019255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:38.579 pt1 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.579 "name": "raid_bdev1", 00:18:38.579 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:38.579 "strip_size_kb": 0, 00:18:38.579 "state": "configuring", 00:18:38.579 "raid_level": "raid1", 00:18:38.579 "superblock": true, 00:18:38.579 "num_base_bdevs": 2, 00:18:38.579 "num_base_bdevs_discovered": 1, 00:18:38.579 "num_base_bdevs_operational": 2, 00:18:38.579 "base_bdevs_list": [ 00:18:38.579 { 00:18:38.579 "name": "pt1", 00:18:38.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.579 "is_configured": true, 00:18:38.579 "data_offset": 256, 00:18:38.579 "data_size": 7936 00:18:38.579 }, 00:18:38.579 { 00:18:38.579 "name": null, 00:18:38.579 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.579 "is_configured": false, 00:18:38.579 "data_offset": 256, 00:18:38.579 "data_size": 7936 00:18:38.579 } 00:18:38.579 ] 00:18:38.579 }' 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.579 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 [2024-11-04 11:51:04.467601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:39.149 [2024-11-04 11:51:04.467688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.149 [2024-11-04 11:51:04.467710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:39.149 [2024-11-04 11:51:04.467722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.149 [2024-11-04 11:51:04.467961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.149 [2024-11-04 11:51:04.467977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:39.149 [2024-11-04 11:51:04.468034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:39.149 [2024-11-04 11:51:04.468081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:39.149 [2024-11-04 11:51:04.468204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:39.149 [2024-11-04 11:51:04.468216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:39.149 [2024-11-04 11:51:04.468294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:39.149 [2024-11-04 11:51:04.468520] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:39.149 [2024-11-04 11:51:04.468547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:39.149 [2024-11-04 11:51:04.468723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.149 pt2 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.149 "name": "raid_bdev1", 00:18:39.149 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:39.149 "strip_size_kb": 0, 00:18:39.149 "state": "online", 00:18:39.149 "raid_level": "raid1", 00:18:39.149 "superblock": true, 00:18:39.149 "num_base_bdevs": 2, 00:18:39.149 "num_base_bdevs_discovered": 2, 00:18:39.149 "num_base_bdevs_operational": 2, 00:18:39.149 "base_bdevs_list": [ 00:18:39.149 { 00:18:39.149 "name": "pt1", 00:18:39.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.149 "is_configured": true, 00:18:39.149 "data_offset": 256, 00:18:39.149 "data_size": 7936 00:18:39.149 }, 00:18:39.149 { 00:18:39.149 "name": "pt2", 00:18:39.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.149 "is_configured": true, 00:18:39.149 "data_offset": 256, 
00:18:39.149 "data_size": 7936 00:18:39.149 } 00:18:39.149 ] 00:18:39.149 }' 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.149 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.409 [2024-11-04 11:51:04.911143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.409 11:51:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.668 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.668 "name": "raid_bdev1", 00:18:39.669 "aliases": [ 00:18:39.669 "40117a79-0cda-4c53-8b3b-93d9b38782fe" 00:18:39.669 ], 00:18:39.669 "product_name": 
"Raid Volume", 00:18:39.669 "block_size": 4096, 00:18:39.669 "num_blocks": 7936, 00:18:39.669 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:39.669 "md_size": 32, 00:18:39.669 "md_interleave": false, 00:18:39.669 "dif_type": 0, 00:18:39.669 "assigned_rate_limits": { 00:18:39.669 "rw_ios_per_sec": 0, 00:18:39.669 "rw_mbytes_per_sec": 0, 00:18:39.669 "r_mbytes_per_sec": 0, 00:18:39.669 "w_mbytes_per_sec": 0 00:18:39.669 }, 00:18:39.669 "claimed": false, 00:18:39.669 "zoned": false, 00:18:39.669 "supported_io_types": { 00:18:39.669 "read": true, 00:18:39.669 "write": true, 00:18:39.669 "unmap": false, 00:18:39.669 "flush": false, 00:18:39.669 "reset": true, 00:18:39.669 "nvme_admin": false, 00:18:39.669 "nvme_io": false, 00:18:39.669 "nvme_io_md": false, 00:18:39.669 "write_zeroes": true, 00:18:39.669 "zcopy": false, 00:18:39.669 "get_zone_info": false, 00:18:39.669 "zone_management": false, 00:18:39.669 "zone_append": false, 00:18:39.669 "compare": false, 00:18:39.669 "compare_and_write": false, 00:18:39.669 "abort": false, 00:18:39.669 "seek_hole": false, 00:18:39.669 "seek_data": false, 00:18:39.669 "copy": false, 00:18:39.669 "nvme_iov_md": false 00:18:39.669 }, 00:18:39.669 "memory_domains": [ 00:18:39.669 { 00:18:39.669 "dma_device_id": "system", 00:18:39.669 "dma_device_type": 1 00:18:39.669 }, 00:18:39.669 { 00:18:39.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.669 "dma_device_type": 2 00:18:39.669 }, 00:18:39.669 { 00:18:39.669 "dma_device_id": "system", 00:18:39.669 "dma_device_type": 1 00:18:39.669 }, 00:18:39.669 { 00:18:39.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.669 "dma_device_type": 2 00:18:39.669 } 00:18:39.669 ], 00:18:39.669 "driver_specific": { 00:18:39.669 "raid": { 00:18:39.669 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:39.669 "strip_size_kb": 0, 00:18:39.669 "state": "online", 00:18:39.669 "raid_level": "raid1", 00:18:39.669 "superblock": true, 00:18:39.669 "num_base_bdevs": 2, 00:18:39.669 
"num_base_bdevs_discovered": 2, 00:18:39.669 "num_base_bdevs_operational": 2, 00:18:39.669 "base_bdevs_list": [ 00:18:39.669 { 00:18:39.669 "name": "pt1", 00:18:39.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.669 "is_configured": true, 00:18:39.669 "data_offset": 256, 00:18:39.669 "data_size": 7936 00:18:39.669 }, 00:18:39.669 { 00:18:39.669 "name": "pt2", 00:18:39.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.669 "is_configured": true, 00:18:39.669 "data_offset": 256, 00:18:39.669 "data_size": 7936 00:18:39.669 } 00:18:39.669 ] 00:18:39.669 } 00:18:39.669 } 00:18:39.669 }' 00:18:39.669 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.669 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:39.669 pt2' 00:18:39.669 11:51:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.669 
11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.669 [2024-11-04 11:51:05.142903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 40117a79-0cda-4c53-8b3b-93d9b38782fe '!=' 40117a79-0cda-4c53-8b3b-93d9b38782fe ']' 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.669 [2024-11-04 11:51:05.178483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.669 11:51:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.669 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.929 "name": "raid_bdev1", 00:18:39.929 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:39.929 "strip_size_kb": 0, 00:18:39.929 "state": "online", 00:18:39.929 "raid_level": "raid1", 00:18:39.929 "superblock": true, 00:18:39.929 "num_base_bdevs": 2, 00:18:39.929 "num_base_bdevs_discovered": 1, 00:18:39.929 "num_base_bdevs_operational": 1, 00:18:39.929 "base_bdevs_list": [ 00:18:39.929 { 00:18:39.929 "name": null, 00:18:39.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.929 "is_configured": false, 00:18:39.929 "data_offset": 0, 00:18:39.929 "data_size": 7936 00:18:39.929 }, 00:18:39.929 { 00:18:39.929 "name": "pt2", 00:18:39.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.929 "is_configured": true, 00:18:39.929 "data_offset": 256, 00:18:39.929 "data_size": 7936 00:18:39.929 } 00:18:39.929 ] 00:18:39.929 }' 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:39.929 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 [2024-11-04 11:51:05.609662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.189 [2024-11-04 11:51:05.609734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.189 [2024-11-04 11:51:05.609882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.189 [2024-11-04 11:51:05.609967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.189 [2024-11-04 11:51:05.610031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:40.189 11:51:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.189 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.189 [2024-11-04 11:51:05.665591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.189 [2024-11-04 11:51:05.665709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.190 
[2024-11-04 11:51:05.665759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:40.190 [2024-11-04 11:51:05.665810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.190 [2024-11-04 11:51:05.667934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.190 [2024-11-04 11:51:05.668011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.190 [2024-11-04 11:51:05.668132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.190 [2024-11-04 11:51:05.668220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.190 [2024-11-04 11:51:05.668375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:40.190 [2024-11-04 11:51:05.668439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:40.190 [2024-11-04 11:51:05.668592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:40.190 [2024-11-04 11:51:05.668777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:40.190 [2024-11-04 11:51:05.668818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:40.190 [2024-11-04 11:51:05.669008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.190 pt2 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.190 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.449 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.449 "name": "raid_bdev1", 00:18:40.449 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:40.449 "strip_size_kb": 0, 00:18:40.449 "state": "online", 00:18:40.449 "raid_level": "raid1", 00:18:40.449 "superblock": true, 00:18:40.449 "num_base_bdevs": 2, 00:18:40.449 "num_base_bdevs_discovered": 1, 00:18:40.449 "num_base_bdevs_operational": 1, 00:18:40.449 "base_bdevs_list": [ 00:18:40.449 { 00:18:40.449 
"name": null, 00:18:40.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.449 "is_configured": false, 00:18:40.449 "data_offset": 256, 00:18:40.449 "data_size": 7936 00:18:40.449 }, 00:18:40.449 { 00:18:40.449 "name": "pt2", 00:18:40.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.449 "is_configured": true, 00:18:40.449 "data_offset": 256, 00:18:40.449 "data_size": 7936 00:18:40.449 } 00:18:40.449 ] 00:18:40.449 }' 00:18:40.449 11:51:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.449 11:51:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 [2024-11-04 11:51:06.120796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.709 [2024-11-04 11:51:06.120895] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.709 [2024-11-04 11:51:06.121001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.709 [2024-11-04 11:51:06.121058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.709 [2024-11-04 11:51:06.121068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.709 11:51:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 [2024-11-04 11:51:06.176744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.709 [2024-11-04 11:51:06.176863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.709 [2024-11-04 11:51:06.176914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:40.709 [2024-11-04 11:51:06.176952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.709 [2024-11-04 11:51:06.179150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.709 [2024-11-04 11:51:06.179227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.709 [2024-11-04 11:51:06.179320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:40.709 [2024-11-04 11:51:06.179422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.709 [2024-11-04 11:51:06.179623] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:40.709 [2024-11-04 11:51:06.179679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.709 [2024-11-04 11:51:06.179753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:40.709 [2024-11-04 11:51:06.179889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.709 [2024-11-04 11:51:06.180017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:40.709 [2024-11-04 11:51:06.180066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:40.709 [2024-11-04 11:51:06.180188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:40.709 [2024-11-04 11:51:06.180344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:40.709 [2024-11-04 11:51:06.180386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:40.709 [2024-11-04 11:51:06.180582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.709 pt1 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.709 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.968 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.968 "name": "raid_bdev1", 00:18:40.968 "uuid": "40117a79-0cda-4c53-8b3b-93d9b38782fe", 00:18:40.968 "strip_size_kb": 0, 00:18:40.968 "state": "online", 00:18:40.968 "raid_level": "raid1", 00:18:40.968 "superblock": true, 00:18:40.968 "num_base_bdevs": 2, 00:18:40.968 "num_base_bdevs_discovered": 1, 00:18:40.968 
"num_base_bdevs_operational": 1, 00:18:40.968 "base_bdevs_list": [ 00:18:40.968 { 00:18:40.968 "name": null, 00:18:40.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.969 "is_configured": false, 00:18:40.969 "data_offset": 256, 00:18:40.969 "data_size": 7936 00:18:40.969 }, 00:18:40.969 { 00:18:40.969 "name": "pt2", 00:18:40.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.969 "is_configured": true, 00:18:40.969 "data_offset": 256, 00:18:40.969 "data_size": 7936 00:18:40.969 } 00:18:40.969 ] 00:18:40.969 }' 00:18:40.969 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.969 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.229 [2024-11-04 
11:51:06.660180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 40117a79-0cda-4c53-8b3b-93d9b38782fe '!=' 40117a79-0cda-4c53-8b3b-93d9b38782fe ']' 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87678 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87678 ']' 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87678 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87678 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87678' 00:18:41.229 killing process with pid 87678 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87678 00:18:41.229 [2024-11-04 11:51:06.736180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.229 [2024-11-04 11:51:06.736274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.229 [2024-11-04 11:51:06.736323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:41.229 [2024-11-04 11:51:06.736340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:41.229 11:51:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87678 00:18:41.488 [2024-11-04 11:51:06.964623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.871 11:51:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:42.871 00:18:42.871 real 0m6.108s 00:18:42.871 user 0m9.246s 00:18:42.871 sys 0m1.083s 00:18:42.871 11:51:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:42.871 ************************************ 00:18:42.871 END TEST raid_superblock_test_md_separate 00:18:42.871 ************************************ 00:18:42.871 11:51:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.871 11:51:08 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:42.871 11:51:08 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:42.871 11:51:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:42.871 11:51:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:42.871 11:51:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.871 ************************************ 00:18:42.871 START TEST raid_rebuild_test_sb_md_separate 00:18:42.871 ************************************ 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:42.871 
11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88005 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88005 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88005 ']' 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:42.871 11:51:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.871 [2024-11-04 11:51:08.239857] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:18:42.871 [2024-11-04 11:51:08.240047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88005 ] 00:18:42.871 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:42.871 Zero copy mechanism will not be used. 00:18:43.169 [2024-11-04 11:51:08.416364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.169 [2024-11-04 11:51:08.527720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.430 [2024-11-04 11:51:08.725167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.430 [2024-11-04 11:51:08.725306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.688 BaseBdev1_malloc 
00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.688 [2024-11-04 11:51:09.124415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:43.688 [2024-11-04 11:51:09.124526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.688 [2024-11-04 11:51:09.124565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:43.688 [2024-11-04 11:51:09.124595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.688 [2024-11-04 11:51:09.126473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.688 [2024-11-04 11:51:09.126542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:43.688 BaseBdev1 00:18:43.688 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.689 BaseBdev2_malloc 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.689 [2024-11-04 11:51:09.179568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:43.689 [2024-11-04 11:51:09.179669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.689 [2024-11-04 11:51:09.179691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:43.689 [2024-11-04 11:51:09.179701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.689 [2024-11-04 11:51:09.181497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.689 [2024-11-04 11:51:09.181533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:43.689 BaseBdev2 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.689 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.947 spare_malloc 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.947 spare_delay 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.947 [2024-11-04 11:51:09.266552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:43.947 [2024-11-04 11:51:09.266609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.947 [2024-11-04 11:51:09.266630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:43.947 [2024-11-04 11:51:09.266641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.947 [2024-11-04 11:51:09.268609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.947 [2024-11-04 11:51:09.268648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:43.947 spare 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.947 [2024-11-04 11:51:09.278567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.947 [2024-11-04 11:51:09.280358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.947 [2024-11-04 11:51:09.280558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:43.947 [2024-11-04 11:51:09.280574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:43.947 [2024-11-04 11:51:09.280659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:43.947 [2024-11-04 11:51:09.280802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:43.947 [2024-11-04 11:51:09.280811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:43.947 [2024-11-04 11:51:09.280927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.947 11:51:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.947 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.948 "name": "raid_bdev1", 00:18:43.948 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:43.948 "strip_size_kb": 0, 00:18:43.948 "state": "online", 00:18:43.948 "raid_level": "raid1", 00:18:43.948 "superblock": true, 00:18:43.948 "num_base_bdevs": 2, 00:18:43.948 "num_base_bdevs_discovered": 2, 00:18:43.948 "num_base_bdevs_operational": 2, 00:18:43.948 "base_bdevs_list": [ 00:18:43.948 { 00:18:43.948 "name": "BaseBdev1", 00:18:43.948 "uuid": "47ed5b02-fe12-58e5-9cca-6e0480b744c4", 00:18:43.948 "is_configured": true, 00:18:43.948 "data_offset": 256, 00:18:43.948 "data_size": 7936 00:18:43.948 }, 00:18:43.948 { 00:18:43.948 "name": "BaseBdev2", 00:18:43.948 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:43.948 "is_configured": true, 00:18:43.948 "data_offset": 256, 00:18:43.948 "data_size": 7936 
00:18:43.948 } 00:18:43.948 ] 00:18:43.948 }' 00:18:43.948 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.948 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.514 [2024-11-04 11:51:09.742075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:44.514 11:51:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:18:44.514 [2024-11-04 11:51:10.001455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:44.514 /dev/nbd0
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:44.773 1+0 records in
00:18:44.773 1+0 records out
00:18:44.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618721 s, 6.6 MB/s
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:18:44.773 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:18:45.339 7936+0 records in
00:18:45.340 7936+0 records out
00:18:45.340 32505856 bytes (33 MB, 31 MiB) copied, 0.634821 s, 51.2 MB/s
00:18:45.340 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:18:45.340 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:18:45.340 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:45.340 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:45.340 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:18:45.340 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:45.340 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:45.598 [2024-11-04 11:51:10.946487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:45.598 [2024-11-04 11:51:10.970562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:45.598 11:51:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:45.598 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:45.598 "name": "raid_bdev1",
00:18:45.598 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:45.598 "strip_size_kb": 0,
00:18:45.598 "state": "online",
00:18:45.598 "raid_level": "raid1",
00:18:45.598 "superblock": true,
00:18:45.598 "num_base_bdevs": 2,
00:18:45.598 "num_base_bdevs_discovered": 1,
00:18:45.598 "num_base_bdevs_operational": 1,
00:18:45.598 "base_bdevs_list": [
00:18:45.598 {
00:18:45.598 "name": null,
00:18:45.598 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:45.598 "is_configured": false,
00:18:45.598 "data_offset": 0,
00:18:45.598 "data_size": 7936
00:18:45.598 },
00:18:45.598 {
00:18:45.598 "name": "BaseBdev2",
00:18:45.598 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:45.598 "is_configured": true,
00:18:45.598 "data_offset": 256,
00:18:45.598 "data_size": 7936
00:18:45.598 }
00:18:45.598 ]
00:18:45.598 }'
00:18:45.598 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:45.598 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:46.165 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:46.165 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:46.165 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:46.165 [2024-11-04 11:51:11.453737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:46.165 [2024-11-04 11:51:11.469979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:18:46.165 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:46.165 11:51:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:18:46.165 [2024-11-04 11:51:11.471931] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.100 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:47.100 "name": "raid_bdev1",
00:18:47.100 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:47.100 "strip_size_kb": 0,
00:18:47.100 "state": "online",
00:18:47.100 "raid_level": "raid1",
00:18:47.100 "superblock": true,
00:18:47.100 "num_base_bdevs": 2,
00:18:47.100 "num_base_bdevs_discovered": 2,
00:18:47.100 "num_base_bdevs_operational": 2,
00:18:47.100 "process": {
00:18:47.100 "type": "rebuild",
00:18:47.100 "target": "spare",
00:18:47.100 "progress": {
00:18:47.100 "blocks": 2560,
00:18:47.100 "percent": 32
00:18:47.100 }
00:18:47.100 },
00:18:47.100 "base_bdevs_list": [
00:18:47.100 {
00:18:47.100 "name": "spare",
00:18:47.100 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513",
00:18:47.100 "is_configured": true,
00:18:47.100 "data_offset": 256,
00:18:47.100 "data_size": 7936
00:18:47.100 },
00:18:47.100 {
00:18:47.100 "name": "BaseBdev2",
00:18:47.100 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:47.100 "is_configured": true,
00:18:47.100 "data_offset": 256,
00:18:47.101 "data_size": 7936
00:18:47.101 }
00:18:47.101 ]
00:18:47.101 }'
00:18:47.101 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:47.101 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:47.101 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.359 [2024-11-04 11:51:12.635661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:47.359 [2024-11-04 11:51:12.677456] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:47.359 [2024-11-04 11:51:12.677522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:47.359 [2024-11-04 11:51:12.677537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:47.359 [2024-11-04 11:51:12.677546] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:47.359 "name": "raid_bdev1",
00:18:47.359 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:47.359 "strip_size_kb": 0,
00:18:47.359 "state": "online",
00:18:47.359 "raid_level": "raid1",
00:18:47.359 "superblock": true,
00:18:47.359 "num_base_bdevs": 2,
00:18:47.359 "num_base_bdevs_discovered": 1,
00:18:47.359 "num_base_bdevs_operational": 1,
00:18:47.359 "base_bdevs_list": [
00:18:47.359 {
00:18:47.359 "name": null,
00:18:47.359 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:47.359 "is_configured": false,
00:18:47.359 "data_offset": 0,
00:18:47.359 "data_size": 7936
00:18:47.359 },
00:18:47.359 {
00:18:47.359 "name": "BaseBdev2",
00:18:47.359 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:47.359 "is_configured": true,
00:18:47.359 "data_offset": 256,
00:18:47.359 "data_size": 7936
00:18:47.359 }
00:18:47.359 ]
00:18:47.359 }'
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:47.359 11:51:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:47.926 "name": "raid_bdev1",
00:18:47.926 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:47.926 "strip_size_kb": 0,
00:18:47.926 "state": "online",
00:18:47.926 "raid_level": "raid1",
00:18:47.926 "superblock": true,
00:18:47.926 "num_base_bdevs": 2,
00:18:47.926 "num_base_bdevs_discovered": 1,
00:18:47.926 "num_base_bdevs_operational": 1,
00:18:47.926 "base_bdevs_list": [
00:18:47.926 {
00:18:47.926 "name": null,
00:18:47.926 "uuid": "00000000-0000-0000-0000-000000000000",
"is_configured": false,
00:18:47.926 "data_offset": 0,
00:18:47.926 "data_size": 7936
00:18:47.926 },
00:18:47.926 {
00:18:47.926 "name": "BaseBdev2",
00:18:47.926 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:47.926 "is_configured": true,
00:18:47.926 "data_offset": 256,
00:18:47.926 "data_size": 7936
00:18:47.926 }
00:18:47.926 ]
00:18:47.926 }'
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:47.926 [2024-11-04 11:51:13.325271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:47.926 [2024-11-04 11:51:13.341865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:47.926 11:51:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:18:47.926 [2024-11-04 11:51:13.344001] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:48.864 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:49.124 "name": "raid_bdev1",
00:18:49.124 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:49.124 "strip_size_kb": 0,
00:18:49.124 "state": "online",
00:18:49.124 "raid_level": "raid1",
00:18:49.124 "superblock": true,
00:18:49.124 "num_base_bdevs": 2,
00:18:49.124 "num_base_bdevs_discovered": 2,
00:18:49.124 "num_base_bdevs_operational": 2,
00:18:49.124 "process": {
00:18:49.124 "type": "rebuild",
00:18:49.124 "target": "spare",
00:18:49.124 "progress": {
00:18:49.124 "blocks": 2560,
00:18:49.124 "percent": 32
00:18:49.124 }
00:18:49.124 },
00:18:49.124 "base_bdevs_list": [
00:18:49.124 {
00:18:49.124 "name": "spare",
00:18:49.124 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513",
00:18:49.124 "is_configured": true,
00:18:49.124 "data_offset": 256,
00:18:49.124 "data_size": 7936 },
00:18:49.124 {
00:18:49.124 "name": "BaseBdev2",
00:18:49.124 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:49.124 "is_configured": true,
00:18:49.124 "data_offset": 256,
00:18:49.124 "data_size": 7936
00:18:49.124 }
00:18:49.124 ]
00:18:49.124 }'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:18:49.124 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=716
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:49.124 "name": "raid_bdev1",
00:18:49.124 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:49.124 "strip_size_kb": 0,
00:18:49.124 "state": "online",
00:18:49.124 "raid_level": "raid1",
00:18:49.124 "superblock": true,
00:18:49.124 "num_base_bdevs": 2,
00:18:49.124 "num_base_bdevs_discovered": 2,
00:18:49.124 "num_base_bdevs_operational": 2,
00:18:49.124 "process": {
00:18:49.124 "type": "rebuild",
00:18:49.124 "target": "spare",
00:18:49.124 "progress": {
00:18:49.124 "blocks": 2816,
00:18:49.124 "percent": 35
00:18:49.124 }
00:18:49.124 },
00:18:49.124 "base_bdevs_list": [
00:18:49.124 {
00:18:49.124 "name": "spare",
00:18:49.124 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513",
00:18:49.124 "is_configured": true,
00:18:49.124 "data_offset": 256,
00:18:49.124 "data_size": 7936
00:18:49.124 },
00:18:49.124 {
00:18:49.124 "name": "BaseBdev2",
00:18:49.124 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:49.124 "is_configured": true,
00:18:49.124 "data_offset": 256,
00:18:49.124 "data_size": 7936
00:18:49.124 }
00:18:49.124 ]
00:18:49.124 }'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:49.124 11:51:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:50.507 "name": "raid_bdev1",
00:18:50.507 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:50.507 "strip_size_kb": 0,
00:18:50.507 "state": "online",
00:18:50.507 "raid_level": "raid1",
00:18:50.507 "superblock": true,
00:18:50.507 "num_base_bdevs": 2,
00:18:50.507 "num_base_bdevs_discovered": 2,
00:18:50.507 "num_base_bdevs_operational": 2,
00:18:50.507 "process": {
00:18:50.507 "type": "rebuild",
00:18:50.507 "target": "spare",
00:18:50.507 "progress": {
00:18:50.507 "blocks": 5632,
00:18:50.507 "percent": 70
00:18:50.507 }
00:18:50.507 },
00:18:50.507 "base_bdevs_list": [
00:18:50.507 {
00:18:50.507 "name": "spare",
00:18:50.507 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513",
00:18:50.507 "is_configured": true,
00:18:50.507 "data_offset": 256,
00:18:50.507 "data_size": 7936
00:18:50.507 },
00:18:50.507 {
00:18:50.507 "name": "BaseBdev2",
00:18:50.507 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:50.507 "is_configured": true,
00:18:50.507 "data_offset": 256,
00:18:50.507 "data_size": 7936
00:18:50.507 }
00:18:50.507 ]
00:18:50.507 }'
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:50.507 11:51:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:18:51.077 [2024-11-04 11:51:16.458926] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:18:51.077 [2024-11-04 11:51:16.459081] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:18:51.077 [2024-11-04 11:51:16.459278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:51.337 "name": "raid_bdev1",
00:18:51.337 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:51.337 "strip_size_kb": 0,
00:18:51.337 "state": "online",
00:18:51.337 "raid_level": "raid1",
00:18:51.337 "superblock": true,
00:18:51.337 "num_base_bdevs": 2,
00:18:51.337 "num_base_bdevs_discovered": 2,
00:18:51.337 "num_base_bdevs_operational": 2,
00:18:51.337 "base_bdevs_list": [
00:18:51.337 {
00:18:51.337 "name": "spare",
00:18:51.337 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513",
00:18:51.337 "is_configured": true,
00:18:51.337 "data_offset": 256,
00:18:51.337 "data_size": 7936
00:18:51.337 },
00:18:51.337 {
00:18:51.337 "name": "BaseBdev2",
00:18:51.337 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:51.337 "is_configured": true,
00:18:51.337 "data_offset": 256,
00:18:51.337 "data_size": 7936
00:18:51.337 }
00:18:51.337 ]
00:18:51.337 }'
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:18:51.337 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:51.597 "name": "raid_bdev1",
00:18:51.597 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
00:18:51.597 "strip_size_kb": 0,
00:18:51.597 "state": "online",
00:18:51.597 "raid_level": "raid1",
00:18:51.597 "superblock": true,
00:18:51.597 "num_base_bdevs": 2,
00:18:51.597 "num_base_bdevs_discovered": 2,
00:18:51.597 "num_base_bdevs_operational": 2,
00:18:51.597 "base_bdevs_list": [
00:18:51.597 {
00:18:51.597 "name": "spare",
00:18:51.597 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513",
00:18:51.597 "is_configured": true,
00:18:51.597 "data_offset": 256,
00:18:51.597 "data_size": 7936
00:18:51.597 },
00:18:51.597 {
00:18:51.597 "name": "BaseBdev2",
00:18:51.597 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:51.597 "is_configured": true,
00:18:51.597 "data_offset": 256,
00:18:51.597 "data_size": 7936
00:18:51.597 }
00:18:51.597 ]
00:18:51.597 }'
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:51.597 11:51:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:51.597 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:51.598 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:51.598 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:51.598 "name": "raid_bdev1",
00:18:51.598 "uuid": "e1642680-8377-4efa-9147-72b732bc7568",
"strip_size_kb": 0,
00:18:51.598 "state": "online",
00:18:51.598 "raid_level": "raid1",
00:18:51.598 "superblock": true,
00:18:51.598 "num_base_bdevs": 2,
00:18:51.598 "num_base_bdevs_discovered": 2,
00:18:51.598 "num_base_bdevs_operational": 2,
00:18:51.598 "base_bdevs_list": [
00:18:51.598 {
00:18:51.598 "name": "spare",
00:18:51.598 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513",
00:18:51.598 "is_configured": true,
00:18:51.598 "data_offset": 256,
00:18:51.598 "data_size": 7936
00:18:51.598 },
00:18:51.598 {
00:18:51.598 "name": "BaseBdev2",
00:18:51.598 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d",
00:18:51.598 "is_configured": true,
00:18:51.598 "data_offset": 256,
00:18:51.598 "data_size": 7936
00:18:51.598 }
00:18:51.598 ]
00:18:51.598 }'
00:18:51.598 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:51.598 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:18:52.167 [2024-11-04 11:51:17.496304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:52.167 [2024-11-04 11:51:17.496390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:52.167 [2024-11-04 11:51:17.496554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:52.167 [2024-11-04 11:51:17.496674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:52.167 [2024-11-04 11:51:17.496730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1,
state offline 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:52.167 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:52.427 /dev/nbd0 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:52.427 1+0 records in 00:18:52.427 1+0 records out 00:18:52.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381827 s, 10.7 MB/s 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:52.427 11:51:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:52.686 /dev/nbd1 00:18:52.686 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:52.686 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:52.686 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:52.686 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:18:52.686 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:52.686 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:52.687 1+0 records in 00:18:52.687 1+0 records out 00:18:52.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402022 s, 10.2 MB/s 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:52.687 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:52.945 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:52.945 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.945 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:52.945 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:52.946 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:52.946 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:52.946 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:53.205 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:53.465 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 [2024-11-04 11:51:18.830201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:53.466 [2024-11-04 11:51:18.830322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.466 [2024-11-04 11:51:18.830363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:53.466 [2024-11-04 11:51:18.830373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:53.466 [2024-11-04 11:51:18.832465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.466 [2024-11-04 11:51:18.832502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:53.466 [2024-11-04 11:51:18.832571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:53.466 [2024-11-04 11:51:18.832633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.466 [2024-11-04 11:51:18.832771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.466 spare 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 [2024-11-04 11:51:18.932667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:53.466 [2024-11-04 11:51:18.932715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:53.466 [2024-11-04 11:51:18.932851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:53.466 [2024-11-04 11:51:18.933038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:53.466 [2024-11-04 11:51:18.933047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:53.466 [2024-11-04 11:51:18.933189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.726 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.726 "name": "raid_bdev1", 00:18:53.726 "uuid": 
"e1642680-8377-4efa-9147-72b732bc7568", 00:18:53.726 "strip_size_kb": 0, 00:18:53.726 "state": "online", 00:18:53.726 "raid_level": "raid1", 00:18:53.726 "superblock": true, 00:18:53.726 "num_base_bdevs": 2, 00:18:53.726 "num_base_bdevs_discovered": 2, 00:18:53.726 "num_base_bdevs_operational": 2, 00:18:53.726 "base_bdevs_list": [ 00:18:53.726 { 00:18:53.726 "name": "spare", 00:18:53.726 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513", 00:18:53.726 "is_configured": true, 00:18:53.726 "data_offset": 256, 00:18:53.726 "data_size": 7936 00:18:53.726 }, 00:18:53.726 { 00:18:53.726 "name": "BaseBdev2", 00:18:53.726 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:53.726 "is_configured": true, 00:18:53.726 "data_offset": 256, 00:18:53.726 "data_size": 7936 00:18:53.726 } 00:18:53.726 ] 00:18:53.726 }' 00:18:53.726 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.726 11:51:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.985 "name": "raid_bdev1", 00:18:53.985 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:53.985 "strip_size_kb": 0, 00:18:53.985 "state": "online", 00:18:53.985 "raid_level": "raid1", 00:18:53.985 "superblock": true, 00:18:53.985 "num_base_bdevs": 2, 00:18:53.985 "num_base_bdevs_discovered": 2, 00:18:53.985 "num_base_bdevs_operational": 2, 00:18:53.985 "base_bdevs_list": [ 00:18:53.985 { 00:18:53.985 "name": "spare", 00:18:53.985 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513", 00:18:53.985 "is_configured": true, 00:18:53.985 "data_offset": 256, 00:18:53.985 "data_size": 7936 00:18:53.985 }, 00:18:53.985 { 00:18:53.985 "name": "BaseBdev2", 00:18:53.985 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:53.985 "is_configured": true, 00:18:53.985 "data_offset": 256, 00:18:53.985 "data_size": 7936 00:18:53.985 } 00:18:53.985 ] 00:18:53.985 }' 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.985 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.245 [2024-11-04 11:51:19.561098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.245 11:51:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.245 "name": "raid_bdev1", 00:18:54.245 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:54.245 "strip_size_kb": 0, 00:18:54.245 "state": "online", 00:18:54.245 "raid_level": "raid1", 00:18:54.245 "superblock": true, 00:18:54.245 "num_base_bdevs": 2, 00:18:54.245 "num_base_bdevs_discovered": 1, 00:18:54.245 "num_base_bdevs_operational": 1, 00:18:54.245 "base_bdevs_list": [ 00:18:54.245 { 00:18:54.245 "name": null, 00:18:54.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.245 "is_configured": false, 00:18:54.245 "data_offset": 0, 00:18:54.245 "data_size": 7936 00:18:54.245 }, 00:18:54.245 { 00:18:54.245 "name": "BaseBdev2", 00:18:54.245 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:54.245 "is_configured": true, 00:18:54.245 "data_offset": 256, 00:18:54.245 "data_size": 7936 00:18:54.245 } 00:18:54.245 ] 00:18:54.245 }' 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.245 11:51:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 11:51:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:54.813 11:51:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.813 11:51:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.813 [2024-11-04 11:51:20.072276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.813 [2024-11-04 11:51:20.072601] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:54.813 [2024-11-04 11:51:20.072677] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:54.813 [2024-11-04 11:51:20.072775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.813 [2024-11-04 11:51:20.089611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:54.813 11:51:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.813 11:51:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:54.813 [2024-11-04 11:51:20.091928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.751 11:51:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.751 "name": "raid_bdev1", 00:18:55.751 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:55.751 "strip_size_kb": 0, 00:18:55.751 "state": "online", 00:18:55.751 "raid_level": "raid1", 00:18:55.751 "superblock": true, 00:18:55.751 "num_base_bdevs": 2, 00:18:55.751 "num_base_bdevs_discovered": 2, 00:18:55.751 "num_base_bdevs_operational": 2, 00:18:55.751 "process": { 00:18:55.751 "type": "rebuild", 00:18:55.751 "target": "spare", 00:18:55.751 "progress": { 00:18:55.751 "blocks": 2560, 00:18:55.751 "percent": 32 00:18:55.751 } 00:18:55.751 }, 00:18:55.751 "base_bdevs_list": [ 00:18:55.751 { 00:18:55.751 "name": "spare", 00:18:55.751 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513", 00:18:55.751 "is_configured": true, 00:18:55.751 "data_offset": 256, 00:18:55.751 "data_size": 7936 00:18:55.751 }, 00:18:55.751 { 00:18:55.751 "name": "BaseBdev2", 00:18:55.751 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:55.751 "is_configured": true, 00:18:55.751 "data_offset": 256, 00:18:55.751 "data_size": 7936 00:18:55.751 } 00:18:55.751 ] 00:18:55.751 
}' 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.751 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.751 [2024-11-04 11:51:21.260291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.011 [2024-11-04 11:51:21.297938] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:56.011 [2024-11-04 11:51:21.298020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.011 [2024-11-04 11:51:21.298036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.011 [2024-11-04 11:51:21.298062] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.011 "name": "raid_bdev1", 00:18:56.011 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:56.011 "strip_size_kb": 0, 00:18:56.011 "state": "online", 00:18:56.011 "raid_level": "raid1", 00:18:56.011 "superblock": true, 00:18:56.011 "num_base_bdevs": 2, 00:18:56.011 "num_base_bdevs_discovered": 1, 00:18:56.011 "num_base_bdevs_operational": 1, 00:18:56.011 "base_bdevs_list": [ 00:18:56.011 { 00:18:56.011 "name": 
null, 00:18:56.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.011 "is_configured": false, 00:18:56.011 "data_offset": 0, 00:18:56.011 "data_size": 7936 00:18:56.011 }, 00:18:56.011 { 00:18:56.011 "name": "BaseBdev2", 00:18:56.011 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:56.011 "is_configured": true, 00:18:56.011 "data_offset": 256, 00:18:56.011 "data_size": 7936 00:18:56.011 } 00:18:56.011 ] 00:18:56.011 }' 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.011 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.269 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:56.269 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.269 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:56.269 [2024-11-04 11:51:21.770084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:56.269 [2024-11-04 11:51:21.770223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.269 [2024-11-04 11:51:21.770269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:56.269 [2024-11-04 11:51:21.770303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.269 [2024-11-04 11:51:21.770677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.269 [2024-11-04 11:51:21.770744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:56.270 [2024-11-04 11:51:21.770863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:56.270 [2024-11-04 11:51:21.770911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:56.270 [2024-11-04 11:51:21.770967] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:56.270 [2024-11-04 11:51:21.771043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.270 [2024-11-04 11:51:21.786979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:56.270 spare 00:18:56.270 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.270 11:51:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:56.270 [2024-11-04 11:51:21.789082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.649 11:51:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.649 "name": "raid_bdev1", 00:18:57.649 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:57.649 "strip_size_kb": 0, 00:18:57.649 "state": "online", 00:18:57.649 "raid_level": "raid1", 00:18:57.649 "superblock": true, 00:18:57.649 "num_base_bdevs": 2, 00:18:57.649 "num_base_bdevs_discovered": 2, 00:18:57.649 "num_base_bdevs_operational": 2, 00:18:57.649 "process": { 00:18:57.649 "type": "rebuild", 00:18:57.649 "target": "spare", 00:18:57.649 "progress": { 00:18:57.649 "blocks": 2560, 00:18:57.649 "percent": 32 00:18:57.649 } 00:18:57.649 }, 00:18:57.649 "base_bdevs_list": [ 00:18:57.649 { 00:18:57.649 "name": "spare", 00:18:57.649 "uuid": "753ddf01-f300-5197-8910-3215a7ffe513", 00:18:57.649 "is_configured": true, 00:18:57.649 "data_offset": 256, 00:18:57.649 "data_size": 7936 00:18:57.649 }, 00:18:57.649 { 00:18:57.649 "name": "BaseBdev2", 00:18:57.649 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:57.649 "is_configured": true, 00:18:57.649 "data_offset": 256, 00:18:57.649 "data_size": 7936 00:18:57.649 } 00:18:57.649 ] 00:18:57.649 }' 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.649 11:51:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.649 [2024-11-04 11:51:22.957333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.649 [2024-11-04 11:51:22.994987] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:57.649 [2024-11-04 11:51:22.995073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.649 [2024-11-04 11:51:22.995091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.649 [2024-11-04 11:51:22.995098] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.649 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.650 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.650 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.650 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.650 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.650 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.650 "name": "raid_bdev1", 00:18:57.650 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:57.650 "strip_size_kb": 0, 00:18:57.650 "state": "online", 00:18:57.650 "raid_level": "raid1", 00:18:57.650 "superblock": true, 00:18:57.650 "num_base_bdevs": 2, 00:18:57.650 "num_base_bdevs_discovered": 1, 00:18:57.650 "num_base_bdevs_operational": 1, 00:18:57.650 "base_bdevs_list": [ 00:18:57.650 { 00:18:57.650 "name": null, 00:18:57.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.650 "is_configured": false, 00:18:57.650 "data_offset": 0, 00:18:57.650 "data_size": 7936 00:18:57.650 }, 00:18:57.650 { 00:18:57.650 "name": "BaseBdev2", 00:18:57.650 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:57.650 "is_configured": true, 00:18:57.650 "data_offset": 256, 00:18:57.650 "data_size": 7936 00:18:57.650 } 00:18:57.650 ] 00:18:57.650 }' 00:18:57.650 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.650 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.218 "name": "raid_bdev1", 00:18:58.218 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:58.218 "strip_size_kb": 0, 00:18:58.218 "state": "online", 00:18:58.218 "raid_level": "raid1", 00:18:58.218 "superblock": true, 00:18:58.218 "num_base_bdevs": 2, 00:18:58.218 "num_base_bdevs_discovered": 1, 00:18:58.218 "num_base_bdevs_operational": 1, 00:18:58.218 "base_bdevs_list": [ 00:18:58.218 { 00:18:58.218 "name": null, 00:18:58.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.218 "is_configured": false, 00:18:58.218 "data_offset": 0, 00:18:58.218 "data_size": 7936 00:18:58.218 }, 00:18:58.218 { 00:18:58.218 "name": "BaseBdev2", 00:18:58.218 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 
00:18:58.218 "is_configured": true, 00:18:58.218 "data_offset": 256, 00:18:58.218 "data_size": 7936 00:18:58.218 } 00:18:58.218 ] 00:18:58.218 }' 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.218 [2024-11-04 11:51:23.646505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:58.218 [2024-11-04 11:51:23.646564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.218 [2024-11-04 11:51:23.646592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:58.218 [2024-11-04 11:51:23.646602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:58.218 [2024-11-04 11:51:23.646839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.218 [2024-11-04 11:51:23.646850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:58.218 [2024-11-04 11:51:23.646904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:58.218 [2024-11-04 11:51:23.646916] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.218 [2024-11-04 11:51:23.646926] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:58.218 [2024-11-04 11:51:23.646936] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:58.218 BaseBdev1 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.218 11:51:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.157 11:51:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.157 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.420 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.420 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.420 "name": "raid_bdev1", 00:18:59.420 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:59.420 "strip_size_kb": 0, 00:18:59.420 "state": "online", 00:18:59.420 "raid_level": "raid1", 00:18:59.420 "superblock": true, 00:18:59.420 "num_base_bdevs": 2, 00:18:59.420 "num_base_bdevs_discovered": 1, 00:18:59.420 "num_base_bdevs_operational": 1, 00:18:59.420 "base_bdevs_list": [ 00:18:59.420 { 00:18:59.420 "name": null, 00:18:59.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.420 "is_configured": false, 00:18:59.420 "data_offset": 0, 00:18:59.420 "data_size": 7936 00:18:59.420 }, 00:18:59.420 { 00:18:59.420 "name": "BaseBdev2", 00:18:59.420 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:59.420 "is_configured": true, 00:18:59.420 "data_offset": 256, 00:18:59.420 "data_size": 7936 00:18:59.420 } 00:18:59.420 ] 00:18:59.420 }' 00:18:59.420 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.420 11:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.686 "name": "raid_bdev1", 00:18:59.686 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:18:59.686 "strip_size_kb": 0, 00:18:59.686 "state": "online", 00:18:59.686 "raid_level": "raid1", 00:18:59.686 "superblock": true, 00:18:59.686 "num_base_bdevs": 2, 00:18:59.686 "num_base_bdevs_discovered": 1, 00:18:59.686 "num_base_bdevs_operational": 1, 00:18:59.686 "base_bdevs_list": [ 00:18:59.686 { 00:18:59.686 "name": null, 00:18:59.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.686 
"is_configured": false, 00:18:59.686 "data_offset": 0, 00:18:59.686 "data_size": 7936 00:18:59.686 }, 00:18:59.686 { 00:18:59.686 "name": "BaseBdev2", 00:18:59.686 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:18:59.686 "is_configured": true, 00:18:59.686 "data_offset": 256, 00:18:59.686 "data_size": 7936 00:18:59.686 } 00:18:59.686 ] 00:18:59.686 }' 00:18:59.686 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:59.953 11:51:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.953 [2024-11-04 11:51:25.268235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.953 [2024-11-04 11:51:25.268431] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:59.953 [2024-11-04 11:51:25.268449] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:59.953 request: 00:18:59.953 { 00:18:59.953 "base_bdev": "BaseBdev1", 00:18:59.953 "raid_bdev": "raid_bdev1", 00:18:59.953 "method": "bdev_raid_add_base_bdev", 00:18:59.953 "req_id": 1 00:18:59.953 } 00:18:59.953 Got JSON-RPC error response 00:18:59.953 response: 00:18:59.953 { 00:18:59.953 "code": -22, 00:18:59.953 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:59.953 } 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.953 11:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.892 "name": "raid_bdev1", 00:19:00.892 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:19:00.892 "strip_size_kb": 0, 00:19:00.892 "state": "online", 00:19:00.892 "raid_level": "raid1", 00:19:00.892 "superblock": true, 00:19:00.892 "num_base_bdevs": 2, 00:19:00.892 
"num_base_bdevs_discovered": 1, 00:19:00.892 "num_base_bdevs_operational": 1, 00:19:00.892 "base_bdevs_list": [ 00:19:00.892 { 00:19:00.892 "name": null, 00:19:00.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.892 "is_configured": false, 00:19:00.892 "data_offset": 0, 00:19:00.892 "data_size": 7936 00:19:00.892 }, 00:19:00.892 { 00:19:00.892 "name": "BaseBdev2", 00:19:00.892 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:19:00.892 "is_configured": true, 00:19:00.892 "data_offset": 256, 00:19:00.892 "data_size": 7936 00:19:00.892 } 00:19:00.892 ] 00:19:00.892 }' 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.892 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.462 "name": "raid_bdev1", 00:19:01.462 "uuid": "e1642680-8377-4efa-9147-72b732bc7568", 00:19:01.462 "strip_size_kb": 0, 00:19:01.462 "state": "online", 00:19:01.462 "raid_level": "raid1", 00:19:01.462 "superblock": true, 00:19:01.462 "num_base_bdevs": 2, 00:19:01.462 "num_base_bdevs_discovered": 1, 00:19:01.462 "num_base_bdevs_operational": 1, 00:19:01.462 "base_bdevs_list": [ 00:19:01.462 { 00:19:01.462 "name": null, 00:19:01.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.462 "is_configured": false, 00:19:01.462 "data_offset": 0, 00:19:01.462 "data_size": 7936 00:19:01.462 }, 00:19:01.462 { 00:19:01.462 "name": "BaseBdev2", 00:19:01.462 "uuid": "b8441575-7dab-5173-8767-d50ab2ba2d5d", 00:19:01.462 "is_configured": true, 00:19:01.462 "data_offset": 256, 00:19:01.462 "data_size": 7936 00:19:01.462 } 00:19:01.462 ] 00:19:01.462 }' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88005 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88005 ']' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88005 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:19:01.462 11:51:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88005 00:19:01.462 killing process with pid 88005 00:19:01.462 Received shutdown signal, test time was about 60.000000 seconds 00:19:01.462 00:19:01.462 Latency(us) 00:19:01.462 [2024-11-04T11:51:26.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.462 [2024-11-04T11:51:26.984Z] =================================================================================================================== 00:19:01.462 [2024-11-04T11:51:26.984Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88005' 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88005 00:19:01.462 11:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88005 00:19:01.462 [2024-11-04 11:51:26.912781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.462 [2024-11-04 11:51:26.912919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.462 [2024-11-04 11:51:26.912998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.462 [2024-11-04 11:51:26.913013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:02.031 [2024-11-04 11:51:27.296695] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:19:03.410 11:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:03.410 00:19:03.410 real 0m20.414s 00:19:03.410 user 0m26.745s 00:19:03.410 sys 0m2.700s 00:19:03.410 ************************************ 00:19:03.410 END TEST raid_rebuild_test_sb_md_separate 00:19:03.410 ************************************ 00:19:03.410 11:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:03.410 11:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.410 11:51:28 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:03.410 11:51:28 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:03.410 11:51:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:03.410 11:51:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:03.410 11:51:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.410 ************************************ 00:19:03.410 START TEST raid_state_function_test_sb_md_interleaved 00:19:03.410 ************************************ 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:03.410 11:51:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:03.410 Process raid pid: 88698 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88698 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88698' 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88698 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88698 ']' 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:03.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:03.410 11:51:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.410 [2024-11-04 11:51:28.723653] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:19:03.410 [2024-11-04 11:51:28.723885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.410 [2024-11-04 11:51:28.905146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.669 [2024-11-04 11:51:29.039757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.928 [2024-11-04 11:51:29.285821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.928 [2024-11-04 11:51:29.285962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.188 [2024-11-04 11:51:29.663743] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:04.188 [2024-11-04 11:51:29.663814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:04.188 [2024-11-04 11:51:29.663826] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.188 [2024-11-04 11:51:29.663837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.188 11:51:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.188 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.188 11:51:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.445 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.445 "name": "Existed_Raid", 00:19:04.445 "uuid": "6ca8e29f-2ecb-4a2c-9d18-a3221bf18182", 00:19:04.445 "strip_size_kb": 0, 00:19:04.445 "state": "configuring", 00:19:04.445 "raid_level": "raid1", 00:19:04.445 "superblock": true, 00:19:04.445 "num_base_bdevs": 2, 00:19:04.445 "num_base_bdevs_discovered": 0, 00:19:04.445 "num_base_bdevs_operational": 2, 00:19:04.445 "base_bdevs_list": [ 00:19:04.445 { 00:19:04.445 "name": "BaseBdev1", 00:19:04.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.445 "is_configured": false, 00:19:04.445 "data_offset": 0, 00:19:04.445 "data_size": 0 00:19:04.445 }, 00:19:04.445 { 00:19:04.445 "name": "BaseBdev2", 00:19:04.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.445 "is_configured": false, 00:19:04.445 "data_offset": 0, 00:19:04.445 "data_size": 0 00:19:04.445 } 00:19:04.445 ] 00:19:04.445 }' 00:19:04.445 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.445 11:51:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 [2024-11-04 11:51:30.126888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.704 [2024-11-04 11:51:30.126990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 [2024-11-04 11:51:30.138872] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:04.704 [2024-11-04 11:51:30.138923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:04.704 [2024-11-04 11:51:30.138935] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.704 [2024-11-04 11:51:30.138949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 [2024-11-04 11:51:30.193241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.704 BaseBdev1 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.704 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.704 [ 00:19:04.704 { 00:19:04.704 "name": "BaseBdev1", 00:19:04.704 "aliases": [ 00:19:04.704 "1c475098-8ab7-4d91-9574-494db2a4185c" 00:19:04.704 ], 00:19:04.704 "product_name": "Malloc disk", 00:19:04.704 "block_size": 4128, 00:19:04.704 "num_blocks": 8192, 00:19:04.704 "uuid": "1c475098-8ab7-4d91-9574-494db2a4185c", 00:19:04.704 "md_size": 32, 00:19:04.704 
"md_interleave": true, 00:19:04.704 "dif_type": 0, 00:19:04.704 "assigned_rate_limits": { 00:19:04.704 "rw_ios_per_sec": 0, 00:19:04.704 "rw_mbytes_per_sec": 0, 00:19:04.704 "r_mbytes_per_sec": 0, 00:19:04.704 "w_mbytes_per_sec": 0 00:19:04.704 }, 00:19:04.704 "claimed": true, 00:19:04.985 "claim_type": "exclusive_write", 00:19:04.985 "zoned": false, 00:19:04.985 "supported_io_types": { 00:19:04.985 "read": true, 00:19:04.985 "write": true, 00:19:04.985 "unmap": true, 00:19:04.985 "flush": true, 00:19:04.985 "reset": true, 00:19:04.985 "nvme_admin": false, 00:19:04.985 "nvme_io": false, 00:19:04.985 "nvme_io_md": false, 00:19:04.985 "write_zeroes": true, 00:19:04.985 "zcopy": true, 00:19:04.985 "get_zone_info": false, 00:19:04.985 "zone_management": false, 00:19:04.985 "zone_append": false, 00:19:04.985 "compare": false, 00:19:04.985 "compare_and_write": false, 00:19:04.985 "abort": true, 00:19:04.985 "seek_hole": false, 00:19:04.985 "seek_data": false, 00:19:04.985 "copy": true, 00:19:04.985 "nvme_iov_md": false 00:19:04.985 }, 00:19:04.985 "memory_domains": [ 00:19:04.985 { 00:19:04.985 "dma_device_id": "system", 00:19:04.985 "dma_device_type": 1 00:19:04.985 }, 00:19:04.985 { 00:19:04.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.985 "dma_device_type": 2 00:19:04.985 } 00:19:04.985 ], 00:19:04.985 "driver_specific": {} 00:19:04.985 } 00:19:04.985 ] 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.986 11:51:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.986 "name": "Existed_Raid", 00:19:04.986 "uuid": "9c1a7782-5ca1-4009-a277-c2e2ad2351f9", 00:19:04.986 "strip_size_kb": 0, 00:19:04.986 "state": "configuring", 00:19:04.986 "raid_level": "raid1", 
00:19:04.986 "superblock": true, 00:19:04.986 "num_base_bdevs": 2, 00:19:04.986 "num_base_bdevs_discovered": 1, 00:19:04.986 "num_base_bdevs_operational": 2, 00:19:04.986 "base_bdevs_list": [ 00:19:04.986 { 00:19:04.986 "name": "BaseBdev1", 00:19:04.986 "uuid": "1c475098-8ab7-4d91-9574-494db2a4185c", 00:19:04.986 "is_configured": true, 00:19:04.986 "data_offset": 256, 00:19:04.986 "data_size": 7936 00:19:04.986 }, 00:19:04.986 { 00:19:04.986 "name": "BaseBdev2", 00:19:04.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.986 "is_configured": false, 00:19:04.986 "data_offset": 0, 00:19:04.986 "data_size": 0 00:19:04.986 } 00:19:04.986 ] 00:19:04.986 }' 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.986 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.259 [2024-11-04 11:51:30.688513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:05.259 [2024-11-04 11:51:30.688645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.259 [2024-11-04 11:51:30.700573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.259 [2024-11-04 11:51:30.702471] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.259 [2024-11-04 11:51:30.702516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.259 
11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.259 "name": "Existed_Raid", 00:19:05.259 "uuid": "2fdd6618-00f0-4c2a-a649-451155ed172e", 00:19:05.259 "strip_size_kb": 0, 00:19:05.259 "state": "configuring", 00:19:05.259 "raid_level": "raid1", 00:19:05.259 "superblock": true, 00:19:05.259 "num_base_bdevs": 2, 00:19:05.259 "num_base_bdevs_discovered": 1, 00:19:05.259 "num_base_bdevs_operational": 2, 00:19:05.259 "base_bdevs_list": [ 00:19:05.259 { 00:19:05.259 "name": "BaseBdev1", 00:19:05.259 "uuid": "1c475098-8ab7-4d91-9574-494db2a4185c", 00:19:05.259 "is_configured": true, 00:19:05.259 "data_offset": 256, 00:19:05.259 "data_size": 7936 00:19:05.259 }, 00:19:05.259 { 00:19:05.259 "name": "BaseBdev2", 00:19:05.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.259 "is_configured": false, 00:19:05.259 "data_offset": 0, 00:19:05.259 "data_size": 0 00:19:05.259 } 00:19:05.259 ] 00:19:05.259 }' 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:05.259 11:51:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.827 [2024-11-04 11:51:31.158575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.827 [2024-11-04 11:51:31.158926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:05.827 [2024-11-04 11:51:31.158977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:05.827 [2024-11-04 11:51:31.159112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:05.827 [2024-11-04 11:51:31.159228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:05.827 [2024-11-04 11:51:31.159266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:05.827 [2024-11-04 11:51:31.159387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.827 BaseBdev2 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.827 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.827 [ 00:19:05.827 { 00:19:05.827 "name": "BaseBdev2", 00:19:05.827 "aliases": [ 00:19:05.827 "756f3f59-8986-4499-82ac-cf131af8a71f" 00:19:05.827 ], 00:19:05.827 "product_name": "Malloc disk", 00:19:05.827 "block_size": 4128, 00:19:05.827 "num_blocks": 8192, 00:19:05.827 "uuid": "756f3f59-8986-4499-82ac-cf131af8a71f", 00:19:05.827 "md_size": 32, 00:19:05.827 "md_interleave": true, 00:19:05.827 "dif_type": 0, 00:19:05.827 "assigned_rate_limits": { 00:19:05.827 "rw_ios_per_sec": 0, 00:19:05.827 "rw_mbytes_per_sec": 0, 00:19:05.827 "r_mbytes_per_sec": 0, 00:19:05.827 "w_mbytes_per_sec": 0 00:19:05.827 }, 00:19:05.827 "claimed": true, 00:19:05.827 "claim_type": "exclusive_write", 
00:19:05.827 "zoned": false, 00:19:05.827 "supported_io_types": { 00:19:05.827 "read": true, 00:19:05.827 "write": true, 00:19:05.827 "unmap": true, 00:19:05.827 "flush": true, 00:19:05.827 "reset": true, 00:19:05.827 "nvme_admin": false, 00:19:05.827 "nvme_io": false, 00:19:05.827 "nvme_io_md": false, 00:19:05.827 "write_zeroes": true, 00:19:05.827 "zcopy": true, 00:19:05.827 "get_zone_info": false, 00:19:05.827 "zone_management": false, 00:19:05.827 "zone_append": false, 00:19:05.827 "compare": false, 00:19:05.827 "compare_and_write": false, 00:19:05.827 "abort": true, 00:19:05.827 "seek_hole": false, 00:19:05.827 "seek_data": false, 00:19:05.827 "copy": true, 00:19:05.827 "nvme_iov_md": false 00:19:05.827 }, 00:19:05.827 "memory_domains": [ 00:19:05.827 { 00:19:05.827 "dma_device_id": "system", 00:19:05.827 "dma_device_type": 1 00:19:05.828 }, 00:19:05.828 { 00:19:05.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.828 "dma_device_type": 2 00:19:05.828 } 00:19:05.828 ], 00:19:05.828 "driver_specific": {} 00:19:05.828 } 00:19:05.828 ] 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.828 
11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.828 "name": "Existed_Raid", 00:19:05.828 "uuid": "2fdd6618-00f0-4c2a-a649-451155ed172e", 00:19:05.828 "strip_size_kb": 0, 00:19:05.828 "state": "online", 00:19:05.828 "raid_level": "raid1", 00:19:05.828 "superblock": true, 00:19:05.828 "num_base_bdevs": 2, 00:19:05.828 "num_base_bdevs_discovered": 2, 00:19:05.828 
"num_base_bdevs_operational": 2, 00:19:05.828 "base_bdevs_list": [ 00:19:05.828 { 00:19:05.828 "name": "BaseBdev1", 00:19:05.828 "uuid": "1c475098-8ab7-4d91-9574-494db2a4185c", 00:19:05.828 "is_configured": true, 00:19:05.828 "data_offset": 256, 00:19:05.828 "data_size": 7936 00:19:05.828 }, 00:19:05.828 { 00:19:05.828 "name": "BaseBdev2", 00:19:05.828 "uuid": "756f3f59-8986-4499-82ac-cf131af8a71f", 00:19:05.828 "is_configured": true, 00:19:05.828 "data_offset": 256, 00:19:05.828 "data_size": 7936 00:19:05.828 } 00:19:05.828 ] 00:19:05.828 }' 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.828 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.394 11:51:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.394 [2024-11-04 11:51:31.658139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.394 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.394 "name": "Existed_Raid", 00:19:06.394 "aliases": [ 00:19:06.394 "2fdd6618-00f0-4c2a-a649-451155ed172e" 00:19:06.394 ], 00:19:06.394 "product_name": "Raid Volume", 00:19:06.394 "block_size": 4128, 00:19:06.394 "num_blocks": 7936, 00:19:06.394 "uuid": "2fdd6618-00f0-4c2a-a649-451155ed172e", 00:19:06.394 "md_size": 32, 00:19:06.394 "md_interleave": true, 00:19:06.394 "dif_type": 0, 00:19:06.394 "assigned_rate_limits": { 00:19:06.394 "rw_ios_per_sec": 0, 00:19:06.394 "rw_mbytes_per_sec": 0, 00:19:06.394 "r_mbytes_per_sec": 0, 00:19:06.395 "w_mbytes_per_sec": 0 00:19:06.395 }, 00:19:06.395 "claimed": false, 00:19:06.395 "zoned": false, 00:19:06.395 "supported_io_types": { 00:19:06.395 "read": true, 00:19:06.395 "write": true, 00:19:06.395 "unmap": false, 00:19:06.395 "flush": false, 00:19:06.395 "reset": true, 00:19:06.395 "nvme_admin": false, 00:19:06.395 "nvme_io": false, 00:19:06.395 "nvme_io_md": false, 00:19:06.395 "write_zeroes": true, 00:19:06.395 "zcopy": false, 00:19:06.395 "get_zone_info": false, 00:19:06.395 "zone_management": false, 00:19:06.395 "zone_append": false, 00:19:06.395 "compare": false, 00:19:06.395 "compare_and_write": false, 00:19:06.395 "abort": false, 00:19:06.395 "seek_hole": false, 00:19:06.395 "seek_data": false, 00:19:06.395 "copy": false, 00:19:06.395 "nvme_iov_md": false 00:19:06.395 }, 00:19:06.395 "memory_domains": [ 00:19:06.395 { 00:19:06.395 "dma_device_id": "system", 00:19:06.395 "dma_device_type": 1 00:19:06.395 }, 00:19:06.395 { 00:19:06.395 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:06.395 "dma_device_type": 2 00:19:06.395 }, 00:19:06.395 { 00:19:06.395 "dma_device_id": "system", 00:19:06.395 "dma_device_type": 1 00:19:06.395 }, 00:19:06.395 { 00:19:06.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.395 "dma_device_type": 2 00:19:06.395 } 00:19:06.395 ], 00:19:06.395 "driver_specific": { 00:19:06.395 "raid": { 00:19:06.395 "uuid": "2fdd6618-00f0-4c2a-a649-451155ed172e", 00:19:06.395 "strip_size_kb": 0, 00:19:06.395 "state": "online", 00:19:06.395 "raid_level": "raid1", 00:19:06.395 "superblock": true, 00:19:06.395 "num_base_bdevs": 2, 00:19:06.395 "num_base_bdevs_discovered": 2, 00:19:06.395 "num_base_bdevs_operational": 2, 00:19:06.395 "base_bdevs_list": [ 00:19:06.395 { 00:19:06.395 "name": "BaseBdev1", 00:19:06.395 "uuid": "1c475098-8ab7-4d91-9574-494db2a4185c", 00:19:06.395 "is_configured": true, 00:19:06.395 "data_offset": 256, 00:19:06.395 "data_size": 7936 00:19:06.395 }, 00:19:06.395 { 00:19:06.395 "name": "BaseBdev2", 00:19:06.395 "uuid": "756f3f59-8986-4499-82ac-cf131af8a71f", 00:19:06.395 "is_configured": true, 00:19:06.395 "data_offset": 256, 00:19:06.395 "data_size": 7936 00:19:06.395 } 00:19:06.395 ] 00:19:06.395 } 00:19:06.395 } 00:19:06.395 }' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:06.395 BaseBdev2' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:06.395 
11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.395 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.395 [2024-11-04 11:51:31.881560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.654 11:51:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.654 11:51:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.654 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.654 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.654 "name": "Existed_Raid", 00:19:06.654 "uuid": "2fdd6618-00f0-4c2a-a649-451155ed172e", 00:19:06.654 "strip_size_kb": 0, 00:19:06.654 "state": "online", 00:19:06.654 "raid_level": "raid1", 00:19:06.654 "superblock": true, 00:19:06.654 "num_base_bdevs": 2, 00:19:06.654 "num_base_bdevs_discovered": 1, 00:19:06.654 "num_base_bdevs_operational": 1, 00:19:06.654 "base_bdevs_list": [ 00:19:06.654 { 00:19:06.654 "name": null, 00:19:06.654 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:06.654 "is_configured": false, 00:19:06.654 "data_offset": 0, 00:19:06.654 "data_size": 7936 00:19:06.654 }, 00:19:06.654 { 00:19:06.654 "name": "BaseBdev2", 00:19:06.654 "uuid": "756f3f59-8986-4499-82ac-cf131af8a71f", 00:19:06.654 "is_configured": true, 00:19:06.654 "data_offset": 256, 00:19:06.654 "data_size": 7936 00:19:06.654 } 00:19:06.654 ] 00:19:06.654 }' 00:19:06.654 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.654 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.219 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:07.219 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:07.219 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:07.219 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:07.220 11:51:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.220 [2024-11-04 11:51:32.508992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:07.220 [2024-11-04 11:51:32.509106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.220 [2024-11-04 11:51:32.610452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.220 [2024-11-04 11:51:32.610559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.220 [2024-11-04 11:51:32.610621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88698 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88698 ']' 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88698 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88698 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88698' 00:19:07.220 killing process with pid 88698 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88698 00:19:07.220 [2024-11-04 11:51:32.708045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.220 11:51:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88698 00:19:07.220 [2024-11-04 11:51:32.727365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.595 
11:51:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:08.595 00:19:08.595 real 0m5.276s 00:19:08.595 user 0m7.599s 00:19:08.595 sys 0m0.894s 00:19:08.595 11:51:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:08.595 ************************************ 00:19:08.595 END TEST raid_state_function_test_sb_md_interleaved 00:19:08.595 ************************************ 00:19:08.595 11:51:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.595 11:51:33 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:08.595 11:51:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:08.595 11:51:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:08.595 11:51:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.595 ************************************ 00:19:08.595 START TEST raid_superblock_test_md_interleaved 00:19:08.595 ************************************ 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88956 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88956 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88956 ']' 00:19:08.595 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:08.596 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:08.596 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.596 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:08.596 11:51:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.596 [2024-11-04 11:51:34.067022] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:19:08.596 [2024-11-04 11:51:34.067263] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88956 ] 00:19:08.859 [2024-11-04 11:51:34.244060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.859 [2024-11-04 11:51:34.363083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.125 [2024-11-04 11:51:34.571648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.125 [2024-11-04 11:51:34.571811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.693 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.694 malloc1 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.694 [2024-11-04 11:51:34.979962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:09.694 [2024-11-04 11:51:34.980088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.694 [2024-11-04 11:51:34.980135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:19:09.694 [2024-11-04 11:51:34.980171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.694 [2024-11-04 11:51:34.982184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.694 [2024-11-04 11:51:34.982262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:09.694 pt1 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.694 11:51:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.694 malloc2 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.694 [2024-11-04 11:51:35.040819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:09.694 [2024-11-04 11:51:35.040926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.694 [2024-11-04 11:51:35.041001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:09.694 [2024-11-04 11:51:35.041034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.694 [2024-11-04 11:51:35.042847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.694 [2024-11-04 11:51:35.042884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:09.694 pt2 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.694 [2024-11-04 
11:51:35.052854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:09.694 [2024-11-04 11:51:35.054880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:09.694 [2024-11-04 11:51:35.055099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:09.694 [2024-11-04 11:51:35.055113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:09.694 [2024-11-04 11:51:35.055193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:09.694 [2024-11-04 11:51:35.055264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:09.694 [2024-11-04 11:51:35.055276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:09.694 [2024-11-04 11:51:35.055353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.694 "name": "raid_bdev1", 00:19:09.694 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:09.694 "strip_size_kb": 0, 00:19:09.694 "state": "online", 00:19:09.694 "raid_level": "raid1", 00:19:09.694 "superblock": true, 00:19:09.694 "num_base_bdevs": 2, 00:19:09.694 "num_base_bdevs_discovered": 2, 00:19:09.694 "num_base_bdevs_operational": 2, 00:19:09.694 "base_bdevs_list": [ 00:19:09.694 { 00:19:09.694 "name": "pt1", 00:19:09.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:09.694 "is_configured": true, 00:19:09.694 "data_offset": 256, 00:19:09.694 "data_size": 7936 00:19:09.694 }, 00:19:09.694 { 00:19:09.694 "name": "pt2", 00:19:09.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:09.694 "is_configured": true, 00:19:09.694 "data_offset": 256, 00:19:09.694 "data_size": 7936 00:19:09.694 } 00:19:09.694 ] 00:19:09.694 }' 00:19:09.694 11:51:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.694 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.263 [2024-11-04 11:51:35.516451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:10.263 "name": "raid_bdev1", 00:19:10.263 "aliases": [ 00:19:10.263 "694f02cf-5be9-478c-9efe-beadd72fe386" 00:19:10.263 ], 00:19:10.263 "product_name": "Raid Volume", 00:19:10.263 "block_size": 4128, 00:19:10.263 
"num_blocks": 7936, 00:19:10.263 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:10.263 "md_size": 32, 00:19:10.263 "md_interleave": true, 00:19:10.263 "dif_type": 0, 00:19:10.263 "assigned_rate_limits": { 00:19:10.263 "rw_ios_per_sec": 0, 00:19:10.263 "rw_mbytes_per_sec": 0, 00:19:10.263 "r_mbytes_per_sec": 0, 00:19:10.263 "w_mbytes_per_sec": 0 00:19:10.263 }, 00:19:10.263 "claimed": false, 00:19:10.263 "zoned": false, 00:19:10.263 "supported_io_types": { 00:19:10.263 "read": true, 00:19:10.263 "write": true, 00:19:10.263 "unmap": false, 00:19:10.263 "flush": false, 00:19:10.263 "reset": true, 00:19:10.263 "nvme_admin": false, 00:19:10.263 "nvme_io": false, 00:19:10.263 "nvme_io_md": false, 00:19:10.263 "write_zeroes": true, 00:19:10.263 "zcopy": false, 00:19:10.263 "get_zone_info": false, 00:19:10.263 "zone_management": false, 00:19:10.263 "zone_append": false, 00:19:10.263 "compare": false, 00:19:10.263 "compare_and_write": false, 00:19:10.263 "abort": false, 00:19:10.263 "seek_hole": false, 00:19:10.263 "seek_data": false, 00:19:10.263 "copy": false, 00:19:10.263 "nvme_iov_md": false 00:19:10.263 }, 00:19:10.263 "memory_domains": [ 00:19:10.263 { 00:19:10.263 "dma_device_id": "system", 00:19:10.263 "dma_device_type": 1 00:19:10.263 }, 00:19:10.263 { 00:19:10.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.263 "dma_device_type": 2 00:19:10.263 }, 00:19:10.263 { 00:19:10.263 "dma_device_id": "system", 00:19:10.263 "dma_device_type": 1 00:19:10.263 }, 00:19:10.263 { 00:19:10.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.263 "dma_device_type": 2 00:19:10.263 } 00:19:10.263 ], 00:19:10.263 "driver_specific": { 00:19:10.263 "raid": { 00:19:10.263 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:10.263 "strip_size_kb": 0, 00:19:10.263 "state": "online", 00:19:10.263 "raid_level": "raid1", 00:19:10.263 "superblock": true, 00:19:10.263 "num_base_bdevs": 2, 00:19:10.263 "num_base_bdevs_discovered": 2, 00:19:10.263 "num_base_bdevs_operational": 
2, 00:19:10.263 "base_bdevs_list": [ 00:19:10.263 { 00:19:10.263 "name": "pt1", 00:19:10.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.263 "is_configured": true, 00:19:10.263 "data_offset": 256, 00:19:10.263 "data_size": 7936 00:19:10.263 }, 00:19:10.263 { 00:19:10.263 "name": "pt2", 00:19:10.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.263 "is_configured": true, 00:19:10.263 "data_offset": 256, 00:19:10.263 "data_size": 7936 00:19:10.263 } 00:19:10.263 ] 00:19:10.263 } 00:19:10.263 } 00:19:10.263 }' 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:10.263 pt2' 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:10.263 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.264 11:51:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.264 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:10.264 [2024-11-04 11:51:35.768109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=694f02cf-5be9-478c-9efe-beadd72fe386 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 694f02cf-5be9-478c-9efe-beadd72fe386 ']' 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.523 [2024-11-04 11:51:35.815595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.523 [2024-11-04 11:51:35.815626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.523 [2024-11-04 11:51:35.815731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.523 [2024-11-04 11:51:35.815796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.523 [2024-11-04 11:51:35.815809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.523 11:51:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.523 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.524 [2024-11-04 11:51:35.955466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:10.524 [2024-11-04 11:51:35.958089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:10.524 [2024-11-04 
11:51:35.958250] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:10.524 [2024-11-04 11:51:35.958376] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:10.524 [2024-11-04 11:51:35.958453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.524 [2024-11-04 11:51:35.958516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:10.524 request: 00:19:10.524 { 00:19:10.524 "name": "raid_bdev1", 00:19:10.524 "raid_level": "raid1", 00:19:10.524 "base_bdevs": [ 00:19:10.524 "malloc1", 00:19:10.524 "malloc2" 00:19:10.524 ], 00:19:10.524 "superblock": false, 00:19:10.524 "method": "bdev_raid_create", 00:19:10.524 "req_id": 1 00:19:10.524 } 00:19:10.524 Got JSON-RPC error response 00:19:10.524 response: 00:19:10.524 { 00:19:10.524 "code": -17, 00:19:10.524 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:10.524 } 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.524 11:51:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.524 [2024-11-04 11:51:36.019267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:10.524 [2024-11-04 11:51:36.019361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.524 [2024-11-04 11:51:36.019385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:10.524 [2024-11-04 11:51:36.019409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.524 [2024-11-04 11:51:36.021682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.524 [2024-11-04 11:51:36.021727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:10.524 [2024-11-04 11:51:36.021806] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:10.524 [2024-11-04 11:51:36.021881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:10.524 pt1 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.524 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.783 11:51:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.783 "name": "raid_bdev1", 00:19:10.783 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:10.783 "strip_size_kb": 0, 00:19:10.783 "state": "configuring", 00:19:10.783 "raid_level": "raid1", 00:19:10.783 "superblock": true, 00:19:10.783 "num_base_bdevs": 2, 00:19:10.783 "num_base_bdevs_discovered": 1, 00:19:10.783 "num_base_bdevs_operational": 2, 00:19:10.783 "base_bdevs_list": [ 00:19:10.783 { 00:19:10.783 "name": "pt1", 00:19:10.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:10.783 "is_configured": true, 00:19:10.783 "data_offset": 256, 00:19:10.783 "data_size": 7936 00:19:10.783 }, 00:19:10.783 { 00:19:10.783 "name": null, 00:19:10.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:10.783 "is_configured": false, 00:19:10.783 "data_offset": 256, 00:19:10.783 "data_size": 7936 00:19:10.783 } 00:19:10.783 ] 00:19:10.783 }' 00:19:10.783 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.783 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.042 [2024-11-04 11:51:36.506501] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:11.042 [2024-11-04 11:51:36.506577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.042 [2024-11-04 11:51:36.506600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:11.042 [2024-11-04 11:51:36.506611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.042 [2024-11-04 11:51:36.506792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.042 [2024-11-04 11:51:36.506812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:11.042 [2024-11-04 11:51:36.506865] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:11.042 [2024-11-04 11:51:36.506907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:11.042 [2024-11-04 11:51:36.507008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:11.042 [2024-11-04 11:51:36.507070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:11.042 [2024-11-04 11:51:36.507163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:11.042 [2024-11-04 11:51:36.507246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:11.042 [2024-11-04 11:51:36.507256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:11.042 [2024-11-04 11:51:36.507328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.042 pt2 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:11.042 11:51:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.042 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.300 11:51:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.300 "name": "raid_bdev1", 00:19:11.300 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:11.300 "strip_size_kb": 0, 00:19:11.300 "state": "online", 00:19:11.300 "raid_level": "raid1", 00:19:11.300 "superblock": true, 00:19:11.300 "num_base_bdevs": 2, 00:19:11.300 "num_base_bdevs_discovered": 2, 00:19:11.300 "num_base_bdevs_operational": 2, 00:19:11.300 "base_bdevs_list": [ 00:19:11.300 { 00:19:11.300 "name": "pt1", 00:19:11.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.300 "is_configured": true, 00:19:11.300 "data_offset": 256, 00:19:11.300 "data_size": 7936 00:19:11.300 }, 00:19:11.300 { 00:19:11.300 "name": "pt2", 00:19:11.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.300 "is_configured": true, 00:19:11.300 "data_offset": 256, 00:19:11.300 "data_size": 7936 00:19:11.300 } 00:19:11.300 ] 00:19:11.300 }' 00:19:11.300 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.300 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.557 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.558 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:11.558 [2024-11-04 11:51:36.981978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.558 11:51:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.558 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.558 "name": "raid_bdev1", 00:19:11.558 "aliases": [ 00:19:11.558 "694f02cf-5be9-478c-9efe-beadd72fe386" 00:19:11.558 ], 00:19:11.558 "product_name": "Raid Volume", 00:19:11.558 "block_size": 4128, 00:19:11.558 "num_blocks": 7936, 00:19:11.558 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:11.558 "md_size": 32, 00:19:11.558 "md_interleave": true, 00:19:11.558 "dif_type": 0, 00:19:11.558 "assigned_rate_limits": { 00:19:11.558 "rw_ios_per_sec": 0, 00:19:11.558 "rw_mbytes_per_sec": 0, 00:19:11.558 "r_mbytes_per_sec": 0, 00:19:11.558 "w_mbytes_per_sec": 0 00:19:11.558 }, 00:19:11.558 "claimed": false, 00:19:11.558 "zoned": false, 00:19:11.558 "supported_io_types": { 00:19:11.558 "read": true, 00:19:11.558 "write": true, 00:19:11.558 "unmap": false, 00:19:11.558 "flush": false, 00:19:11.558 "reset": true, 00:19:11.558 "nvme_admin": false, 00:19:11.558 "nvme_io": false, 00:19:11.558 "nvme_io_md": false, 00:19:11.558 "write_zeroes": true, 00:19:11.558 "zcopy": false, 00:19:11.558 "get_zone_info": false, 00:19:11.558 "zone_management": false, 00:19:11.558 "zone_append": false, 00:19:11.558 "compare": false, 00:19:11.558 "compare_and_write": false, 00:19:11.558 "abort": false, 00:19:11.558 "seek_hole": false, 
00:19:11.558 "seek_data": false, 00:19:11.558 "copy": false, 00:19:11.558 "nvme_iov_md": false 00:19:11.558 }, 00:19:11.558 "memory_domains": [ 00:19:11.558 { 00:19:11.558 "dma_device_id": "system", 00:19:11.558 "dma_device_type": 1 00:19:11.558 }, 00:19:11.558 { 00:19:11.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.558 "dma_device_type": 2 00:19:11.558 }, 00:19:11.558 { 00:19:11.558 "dma_device_id": "system", 00:19:11.558 "dma_device_type": 1 00:19:11.558 }, 00:19:11.558 { 00:19:11.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.558 "dma_device_type": 2 00:19:11.558 } 00:19:11.558 ], 00:19:11.558 "driver_specific": { 00:19:11.558 "raid": { 00:19:11.558 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:11.558 "strip_size_kb": 0, 00:19:11.558 "state": "online", 00:19:11.558 "raid_level": "raid1", 00:19:11.558 "superblock": true, 00:19:11.558 "num_base_bdevs": 2, 00:19:11.558 "num_base_bdevs_discovered": 2, 00:19:11.558 "num_base_bdevs_operational": 2, 00:19:11.558 "base_bdevs_list": [ 00:19:11.558 { 00:19:11.558 "name": "pt1", 00:19:11.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:11.558 "is_configured": true, 00:19:11.558 "data_offset": 256, 00:19:11.558 "data_size": 7936 00:19:11.558 }, 00:19:11.558 { 00:19:11.558 "name": "pt2", 00:19:11.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.558 "is_configured": true, 00:19:11.558 "data_offset": 256, 00:19:11.558 "data_size": 7936 00:19:11.558 } 00:19:11.558 ] 00:19:11.558 } 00:19:11.558 } 00:19:11.558 }' 00:19:11.558 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:11.558 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:11.558 pt2' 00:19:11.558 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.816 
11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:11.816 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.817 [2024-11-04 11:51:37.217672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 694f02cf-5be9-478c-9efe-beadd72fe386 '!=' 694f02cf-5be9-478c-9efe-beadd72fe386 ']' 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.817 [2024-11-04 11:51:37.265324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:11.817 11:51:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.817 11:51:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.817 "name": "raid_bdev1", 00:19:11.817 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:11.817 "strip_size_kb": 0, 00:19:11.817 "state": "online", 00:19:11.817 "raid_level": "raid1", 00:19:11.817 "superblock": true, 00:19:11.817 "num_base_bdevs": 2, 00:19:11.817 "num_base_bdevs_discovered": 1, 00:19:11.817 "num_base_bdevs_operational": 1, 00:19:11.817 "base_bdevs_list": [ 00:19:11.817 { 00:19:11.817 "name": null, 00:19:11.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.817 "is_configured": false, 00:19:11.817 "data_offset": 0, 00:19:11.817 "data_size": 7936 00:19:11.817 }, 00:19:11.817 { 00:19:11.817 "name": "pt2", 00:19:11.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:11.817 "is_configured": true, 00:19:11.817 "data_offset": 256, 00:19:11.817 "data_size": 7936 00:19:11.817 } 00:19:11.817 ] 00:19:11.817 }' 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.817 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.386 [2024-11-04 11:51:37.748455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.386 [2024-11-04 11:51:37.748491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.386 [2024-11-04 11:51:37.748584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.386 [2024-11-04 11:51:37.748638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.386 [2024-11-04 11:51:37.748652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:12.386 11:51:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.386 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.386 [2024-11-04 11:51:37.828331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:12.386 [2024-11-04 11:51:37.828482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.386 [2024-11-04 11:51:37.828525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:12.386 [2024-11-04 11:51:37.828571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.386 [2024-11-04 11:51:37.830805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.386 [2024-11-04 11:51:37.830906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:12.386 [2024-11-04 11:51:37.830997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:12.386 [2024-11-04 11:51:37.831114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:12.386 [2024-11-04 11:51:37.831230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:12.386 [2024-11-04 11:51:37.831285] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:12.387 [2024-11-04 11:51:37.831449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:12.387 [2024-11-04 11:51:37.831597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:12.387 [2024-11-04 11:51:37.831641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:12.387 [2024-11-04 11:51:37.831801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.387 pt2 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.387 "name": "raid_bdev1", 00:19:12.387 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:12.387 "strip_size_kb": 0, 00:19:12.387 "state": "online", 00:19:12.387 "raid_level": "raid1", 00:19:12.387 "superblock": true, 00:19:12.387 "num_base_bdevs": 2, 00:19:12.387 "num_base_bdevs_discovered": 1, 00:19:12.387 "num_base_bdevs_operational": 1, 00:19:12.387 "base_bdevs_list": [ 00:19:12.387 { 00:19:12.387 "name": null, 00:19:12.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.387 "is_configured": false, 00:19:12.387 "data_offset": 256, 00:19:12.387 "data_size": 7936 00:19:12.387 }, 00:19:12.387 { 00:19:12.387 "name": "pt2", 00:19:12.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.387 "is_configured": true, 00:19:12.387 "data_offset": 256, 00:19:12.387 "data_size": 7936 00:19:12.387 } 00:19:12.387 ] 00:19:12.387 }' 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.387 11:51:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:12.956 11:51:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.956 [2024-11-04 11:51:38.319606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.956 [2024-11-04 11:51:38.319686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.956 [2024-11-04 11:51:38.319837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.956 [2024-11-04 11:51:38.319935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.956 [2024-11-04 11:51:38.319987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.956 [2024-11-04 11:51:38.379577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:12.956 [2024-11-04 11:51:38.379653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.956 [2024-11-04 11:51:38.379678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:12.956 [2024-11-04 11:51:38.379688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.956 [2024-11-04 11:51:38.381817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.956 [2024-11-04 11:51:38.381858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:12.956 [2024-11-04 11:51:38.381922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:12.956 [2024-11-04 11:51:38.381980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:12.956 [2024-11-04 11:51:38.382083] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:12.956 [2024-11-04 11:51:38.382100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.956 [2024-11-04 11:51:38.382121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:12.956 [2024-11-04 11:51:38.382198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:12.956 [2024-11-04 11:51:38.382279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:12.956 [2024-11-04 11:51:38.382288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:12.956 [2024-11-04 11:51:38.382357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:12.956 [2024-11-04 11:51:38.382458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:12.956 [2024-11-04 11:51:38.382497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:12.956 [2024-11-04 11:51:38.382584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.956 pt1 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.956 11:51:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.956 "name": "raid_bdev1", 00:19:12.956 "uuid": "694f02cf-5be9-478c-9efe-beadd72fe386", 00:19:12.956 "strip_size_kb": 0, 00:19:12.956 "state": "online", 00:19:12.956 "raid_level": "raid1", 00:19:12.956 "superblock": true, 00:19:12.956 "num_base_bdevs": 2, 00:19:12.956 "num_base_bdevs_discovered": 1, 00:19:12.956 "num_base_bdevs_operational": 1, 00:19:12.956 "base_bdevs_list": [ 00:19:12.956 { 00:19:12.956 "name": null, 00:19:12.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.956 "is_configured": false, 00:19:12.956 "data_offset": 256, 00:19:12.956 "data_size": 7936 00:19:12.956 }, 00:19:12.956 { 00:19:12.956 "name": "pt2", 00:19:12.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:12.956 "is_configured": true, 00:19:12.956 "data_offset": 256, 00:19:12.956 "data_size": 7936 00:19:12.956 } 00:19:12.956 ] 00:19:12.956 }' 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.956 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.525 [2024-11-04 11:51:38.894873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 694f02cf-5be9-478c-9efe-beadd72fe386 '!=' 694f02cf-5be9-478c-9efe-beadd72fe386 ']' 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88956 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88956 ']' 00:19:13.525 11:51:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88956 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88956 00:19:13.525 killing process with pid 88956 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88956' 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 88956 00:19:13.525 [2024-11-04 11:51:38.974251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:13.525 [2024-11-04 11:51:38.974348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.525 [2024-11-04 11:51:38.974411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.525 [2024-11-04 11:51:38.974427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:13.525 11:51:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 88956 00:19:13.785 [2024-11-04 11:51:39.192756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:15.169 ************************************ 00:19:15.169 END TEST raid_superblock_test_md_interleaved 00:19:15.169 ************************************ 00:19:15.169 11:51:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:15.169 00:19:15.169 real 0m6.335s 00:19:15.169 user 0m9.636s 00:19:15.169 sys 0m1.141s 00:19:15.169 11:51:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:15.169 11:51:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.169 11:51:40 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:15.169 11:51:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:15.169 11:51:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:15.169 11:51:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.169 ************************************ 00:19:15.169 START TEST raid_rebuild_test_sb_md_interleaved 00:19:15.169 ************************************ 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:15.169 11:51:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:15.169 
11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89279 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89279 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89279 ']' 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.169 11:51:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.169 [2024-11-04 11:51:40.478406] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:19:15.169 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:15.169 Zero copy mechanism will not be used. 
00:19:15.170 [2024-11-04 11:51:40.478602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89279 ] 00:19:15.170 [2024-11-04 11:51:40.656009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.428 [2024-11-04 11:51:40.774621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.688 [2024-11-04 11:51:40.982004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.688 [2024-11-04 11:51:40.982173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.948 BaseBdev1_malloc 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.948 11:51:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.948 [2024-11-04 11:51:41.364834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:15.948 [2024-11-04 11:51:41.364897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.948 [2024-11-04 11:51:41.364920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:15.948 [2024-11-04 11:51:41.364932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.948 [2024-11-04 11:51:41.366937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.948 [2024-11-04 11:51:41.367008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.948 BaseBdev1 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.948 BaseBdev2_malloc 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.948 [2024-11-04 11:51:41.420677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:15.948 [2024-11-04 11:51:41.420743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.948 [2024-11-04 11:51:41.420765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:15.948 [2024-11-04 11:51:41.420778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.948 [2024-11-04 11:51:41.422685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.948 [2024-11-04 11:51:41.422786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:15.948 BaseBdev2 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.948 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.208 spare_malloc 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.208 spare_delay 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.208 [2024-11-04 11:51:41.499641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:16.208 [2024-11-04 11:51:41.499713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.208 [2024-11-04 11:51:41.499740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:16.208 [2024-11-04 11:51:41.499752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.208 [2024-11-04 11:51:41.501680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.208 [2024-11-04 11:51:41.501719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:16.208 spare 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.208 [2024-11-04 11:51:41.511651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.208 [2024-11-04 11:51:41.513497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.208 [2024-11-04 
11:51:41.513758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:16.208 [2024-11-04 11:51:41.513777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:16.208 [2024-11-04 11:51:41.513874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:16.208 [2024-11-04 11:51:41.513948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:16.208 [2024-11-04 11:51:41.513956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:16.208 [2024-11-04 11:51:41.514033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.208 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.208 "name": "raid_bdev1", 00:19:16.208 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:16.208 "strip_size_kb": 0, 00:19:16.208 "state": "online", 00:19:16.208 "raid_level": "raid1", 00:19:16.208 "superblock": true, 00:19:16.208 "num_base_bdevs": 2, 00:19:16.208 "num_base_bdevs_discovered": 2, 00:19:16.208 "num_base_bdevs_operational": 2, 00:19:16.208 "base_bdevs_list": [ 00:19:16.208 { 00:19:16.208 "name": "BaseBdev1", 00:19:16.208 "uuid": "03f0c1d0-ee93-5f63-8e39-0f3d08018014", 00:19:16.208 "is_configured": true, 00:19:16.208 "data_offset": 256, 00:19:16.208 "data_size": 7936 00:19:16.208 }, 00:19:16.208 { 00:19:16.208 "name": "BaseBdev2", 00:19:16.208 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:16.208 "is_configured": true, 00:19:16.209 "data_offset": 256, 00:19:16.209 "data_size": 7936 00:19:16.209 } 00:19:16.209 ] 00:19:16.209 }' 00:19:16.209 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.209 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.778 11:51:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:16.778 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:16.778 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.778 11:51:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.778 [2024-11-04 11:51:41.995158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:16.778 11:51:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.778 [2024-11-04 11:51:42.090684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.778 11:51:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.778 "name": "raid_bdev1", 00:19:16.778 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:16.778 "strip_size_kb": 0, 00:19:16.778 "state": "online", 00:19:16.778 "raid_level": "raid1", 00:19:16.778 "superblock": true, 00:19:16.778 "num_base_bdevs": 2, 00:19:16.778 "num_base_bdevs_discovered": 1, 00:19:16.778 "num_base_bdevs_operational": 1, 00:19:16.778 "base_bdevs_list": [ 00:19:16.778 { 00:19:16.778 "name": null, 00:19:16.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.778 "is_configured": false, 00:19:16.778 "data_offset": 0, 00:19:16.778 "data_size": 7936 00:19:16.778 }, 00:19:16.778 { 00:19:16.778 "name": "BaseBdev2", 00:19:16.778 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:16.778 "is_configured": true, 00:19:16.778 "data_offset": 256, 00:19:16.778 "data_size": 7936 00:19:16.778 } 00:19:16.778 ] 00:19:16.778 }' 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.778 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.347 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:17.347 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.347 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.347 [2024-11-04 11:51:42.569933] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.347 [2024-11-04 11:51:42.586957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:17.347 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.347 11:51:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:17.347 [2024-11-04 11:51:42.588832] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.287 "name": "raid_bdev1", 00:19:18.287 
"uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:18.287 "strip_size_kb": 0, 00:19:18.287 "state": "online", 00:19:18.287 "raid_level": "raid1", 00:19:18.287 "superblock": true, 00:19:18.287 "num_base_bdevs": 2, 00:19:18.287 "num_base_bdevs_discovered": 2, 00:19:18.287 "num_base_bdevs_operational": 2, 00:19:18.287 "process": { 00:19:18.287 "type": "rebuild", 00:19:18.287 "target": "spare", 00:19:18.287 "progress": { 00:19:18.287 "blocks": 2560, 00:19:18.287 "percent": 32 00:19:18.287 } 00:19:18.287 }, 00:19:18.287 "base_bdevs_list": [ 00:19:18.287 { 00:19:18.287 "name": "spare", 00:19:18.287 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:18.287 "is_configured": true, 00:19:18.287 "data_offset": 256, 00:19:18.287 "data_size": 7936 00:19:18.287 }, 00:19:18.287 { 00:19:18.287 "name": "BaseBdev2", 00:19:18.287 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:18.287 "is_configured": true, 00:19:18.287 "data_offset": 256, 00:19:18.287 "data_size": 7936 00:19:18.287 } 00:19:18.287 ] 00:19:18.287 }' 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.287 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.287 [2024-11-04 11:51:43.748264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:18.287 [2024-11-04 11:51:43.794586] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:18.287 [2024-11-04 11:51:43.794652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.287 [2024-11-04 11:51:43.794667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.287 [2024-11-04 11:51:43.794680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.546 "name": "raid_bdev1", 00:19:18.546 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:18.546 "strip_size_kb": 0, 00:19:18.546 "state": "online", 00:19:18.546 "raid_level": "raid1", 00:19:18.546 "superblock": true, 00:19:18.546 "num_base_bdevs": 2, 00:19:18.546 "num_base_bdevs_discovered": 1, 00:19:18.546 "num_base_bdevs_operational": 1, 00:19:18.546 "base_bdevs_list": [ 00:19:18.546 { 00:19:18.546 "name": null, 00:19:18.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.546 "is_configured": false, 00:19:18.546 "data_offset": 0, 00:19:18.546 "data_size": 7936 00:19:18.546 }, 00:19:18.546 { 00:19:18.546 "name": "BaseBdev2", 00:19:18.546 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:18.546 "is_configured": true, 00:19:18.546 "data_offset": 256, 00:19:18.546 "data_size": 7936 00:19:18.546 } 00:19:18.546 ] 00:19:18.546 }' 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.546 11:51:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.806 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.065 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.065 "name": "raid_bdev1", 00:19:19.065 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:19.065 "strip_size_kb": 0, 00:19:19.065 "state": "online", 00:19:19.065 "raid_level": "raid1", 00:19:19.065 "superblock": true, 00:19:19.065 "num_base_bdevs": 2, 00:19:19.065 "num_base_bdevs_discovered": 1, 00:19:19.066 "num_base_bdevs_operational": 1, 00:19:19.066 "base_bdevs_list": [ 00:19:19.066 { 00:19:19.066 "name": null, 00:19:19.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.066 "is_configured": false, 00:19:19.066 "data_offset": 0, 00:19:19.066 "data_size": 7936 00:19:19.066 }, 00:19:19.066 { 00:19:19.066 "name": "BaseBdev2", 00:19:19.066 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:19.066 "is_configured": true, 00:19:19.066 "data_offset": 256, 00:19:19.066 "data_size": 7936 00:19:19.066 } 00:19:19.066 ] 00:19:19.066 }' 
00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.066 [2024-11-04 11:51:44.420543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.066 [2024-11-04 11:51:44.439105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.066 11:51:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:19.066 [2024-11-04 11:51:44.441230] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.004 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.004 "name": "raid_bdev1", 00:19:20.004 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:20.004 "strip_size_kb": 0, 00:19:20.004 "state": "online", 00:19:20.004 "raid_level": "raid1", 00:19:20.004 "superblock": true, 00:19:20.004 "num_base_bdevs": 2, 00:19:20.004 "num_base_bdevs_discovered": 2, 00:19:20.004 "num_base_bdevs_operational": 2, 00:19:20.004 "process": { 00:19:20.004 "type": "rebuild", 00:19:20.004 "target": "spare", 00:19:20.004 "progress": { 00:19:20.004 "blocks": 2560, 00:19:20.004 "percent": 32 00:19:20.004 } 00:19:20.004 }, 00:19:20.004 "base_bdevs_list": [ 00:19:20.004 { 00:19:20.004 "name": "spare", 00:19:20.004 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:20.004 "is_configured": true, 00:19:20.004 "data_offset": 256, 00:19:20.004 "data_size": 7936 00:19:20.004 }, 00:19:20.004 { 00:19:20.004 "name": "BaseBdev2", 00:19:20.004 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:20.004 "is_configured": true, 00:19:20.004 "data_offset": 256, 00:19:20.004 "data_size": 7936 00:19:20.004 } 00:19:20.004 ] 00:19:20.004 }' 00:19:20.004 11:51:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:20.265 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=747 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.265 11:51:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.265 "name": "raid_bdev1", 00:19:20.265 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:20.265 "strip_size_kb": 0, 00:19:20.265 "state": "online", 00:19:20.265 "raid_level": "raid1", 00:19:20.265 "superblock": true, 00:19:20.265 "num_base_bdevs": 2, 00:19:20.265 "num_base_bdevs_discovered": 2, 00:19:20.265 "num_base_bdevs_operational": 2, 00:19:20.265 "process": { 00:19:20.265 "type": "rebuild", 00:19:20.265 "target": "spare", 00:19:20.265 "progress": { 00:19:20.265 "blocks": 2816, 00:19:20.265 "percent": 35 00:19:20.265 } 00:19:20.265 }, 00:19:20.265 "base_bdevs_list": [ 00:19:20.265 { 00:19:20.265 "name": "spare", 00:19:20.265 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:20.265 "is_configured": true, 00:19:20.265 "data_offset": 256, 00:19:20.265 "data_size": 7936 00:19:20.265 }, 00:19:20.265 { 00:19:20.265 "name": "BaseBdev2", 00:19:20.265 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:20.265 "is_configured": true, 00:19:20.265 "data_offset": 256, 00:19:20.265 "data_size": 7936 00:19:20.265 } 00:19:20.265 ] 00:19:20.265 }' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.265 11:51:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:21.645 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.645 11:51:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.645 "name": "raid_bdev1", 00:19:21.645 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:21.645 "strip_size_kb": 0, 00:19:21.645 "state": "online", 00:19:21.645 "raid_level": "raid1", 00:19:21.645 "superblock": true, 00:19:21.645 "num_base_bdevs": 2, 00:19:21.645 "num_base_bdevs_discovered": 2, 00:19:21.645 "num_base_bdevs_operational": 2, 00:19:21.645 "process": { 00:19:21.645 "type": "rebuild", 00:19:21.645 "target": "spare", 00:19:21.645 "progress": { 00:19:21.645 "blocks": 5632, 00:19:21.645 "percent": 70 00:19:21.645 } 00:19:21.645 }, 00:19:21.645 "base_bdevs_list": [ 00:19:21.646 { 00:19:21.646 "name": "spare", 00:19:21.646 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:21.646 "is_configured": true, 00:19:21.646 "data_offset": 256, 00:19:21.646 "data_size": 7936 00:19:21.646 }, 00:19:21.646 { 00:19:21.646 "name": "BaseBdev2", 00:19:21.646 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:21.646 "is_configured": true, 00:19:21.646 "data_offset": 256, 00:19:21.646 "data_size": 7936 00:19:21.646 } 00:19:21.646 ] 00:19:21.646 }' 00:19:21.646 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.646 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.646 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.646 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.646 11:51:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:22.214 [2024-11-04 11:51:47.556120] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:22.214 [2024-11-04 11:51:47.556204] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:22.214 [2024-11-04 11:51:47.556359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.473 "name": "raid_bdev1", 00:19:22.473 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:22.473 "strip_size_kb": 0, 00:19:22.473 "state": "online", 00:19:22.473 "raid_level": "raid1", 00:19:22.473 "superblock": true, 00:19:22.473 "num_base_bdevs": 2, 00:19:22.473 
"num_base_bdevs_discovered": 2, 00:19:22.473 "num_base_bdevs_operational": 2, 00:19:22.473 "base_bdevs_list": [ 00:19:22.473 { 00:19:22.473 "name": "spare", 00:19:22.473 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:22.473 "is_configured": true, 00:19:22.473 "data_offset": 256, 00:19:22.473 "data_size": 7936 00:19:22.473 }, 00:19:22.473 { 00:19:22.473 "name": "BaseBdev2", 00:19:22.473 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:22.473 "is_configured": true, 00:19:22.473 "data_offset": 256, 00:19:22.473 "data_size": 7936 00:19:22.473 } 00:19:22.473 ] 00:19:22.473 }' 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:22.473 11:51:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.733 11:51:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.733 "name": "raid_bdev1", 00:19:22.733 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:22.733 "strip_size_kb": 0, 00:19:22.733 "state": "online", 00:19:22.733 "raid_level": "raid1", 00:19:22.733 "superblock": true, 00:19:22.733 "num_base_bdevs": 2, 00:19:22.733 "num_base_bdevs_discovered": 2, 00:19:22.733 "num_base_bdevs_operational": 2, 00:19:22.733 "base_bdevs_list": [ 00:19:22.733 { 00:19:22.733 "name": "spare", 00:19:22.733 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:22.733 "is_configured": true, 00:19:22.733 "data_offset": 256, 00:19:22.733 "data_size": 7936 00:19:22.733 }, 00:19:22.733 { 00:19:22.733 "name": "BaseBdev2", 00:19:22.733 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:22.733 "is_configured": true, 00:19:22.733 "data_offset": 256, 00:19:22.733 "data_size": 7936 00:19:22.733 } 00:19:22.733 ] 00:19:22.733 }' 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.733 11:51:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.733 "name": 
"raid_bdev1", 00:19:22.733 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:22.733 "strip_size_kb": 0, 00:19:22.733 "state": "online", 00:19:22.733 "raid_level": "raid1", 00:19:22.733 "superblock": true, 00:19:22.733 "num_base_bdevs": 2, 00:19:22.733 "num_base_bdevs_discovered": 2, 00:19:22.733 "num_base_bdevs_operational": 2, 00:19:22.733 "base_bdevs_list": [ 00:19:22.733 { 00:19:22.733 "name": "spare", 00:19:22.733 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:22.733 "is_configured": true, 00:19:22.733 "data_offset": 256, 00:19:22.733 "data_size": 7936 00:19:22.733 }, 00:19:22.733 { 00:19:22.733 "name": "BaseBdev2", 00:19:22.733 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:22.733 "is_configured": true, 00:19:22.733 "data_offset": 256, 00:19:22.733 "data_size": 7936 00:19:22.733 } 00:19:22.733 ] 00:19:22.733 }' 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.733 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.302 [2024-11-04 11:51:48.624601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.302 [2024-11-04 11:51:48.624640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.302 [2024-11-04 11:51:48.624742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.302 [2024-11-04 11:51:48.624824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.302 [2024-11-04 
11:51:48.624839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.302 11:51:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.302 [2024-11-04 11:51:48.696496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.302 [2024-11-04 11:51:48.696992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.302 [2024-11-04 11:51:48.697087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:23.302 [2024-11-04 11:51:48.697189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.302 [2024-11-04 11:51:48.699533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.302 [2024-11-04 11:51:48.699728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.302 [2024-11-04 11:51:48.699908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:23.302 [2024-11-04 11:51:48.699990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.302 [2024-11-04 11:51:48.700158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.302 spare 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.302 [2024-11-04 11:51:48.800090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:23.302 [2024-11-04 11:51:48.800212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:23.302 [2024-11-04 11:51:48.800437] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:23.302 [2024-11-04 11:51:48.800631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:23.302 [2024-11-04 11:51:48.800650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:23.302 [2024-11-04 11:51:48.800793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.302 11:51:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.302 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.561 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.561 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.561 "name": "raid_bdev1", 00:19:23.561 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:23.561 "strip_size_kb": 0, 00:19:23.561 "state": "online", 00:19:23.561 "raid_level": "raid1", 00:19:23.561 "superblock": true, 00:19:23.561 "num_base_bdevs": 2, 00:19:23.561 "num_base_bdevs_discovered": 2, 00:19:23.561 "num_base_bdevs_operational": 2, 00:19:23.561 "base_bdevs_list": [ 00:19:23.561 { 00:19:23.561 "name": "spare", 00:19:23.561 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:23.561 "is_configured": true, 00:19:23.561 "data_offset": 256, 00:19:23.561 "data_size": 7936 00:19:23.561 }, 00:19:23.561 { 00:19:23.561 "name": "BaseBdev2", 00:19:23.561 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:23.561 "is_configured": true, 00:19:23.561 "data_offset": 256, 00:19:23.561 "data_size": 7936 00:19:23.561 } 00:19:23.561 ] 00:19:23.561 }' 00:19:23.561 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.561 11:51:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.821 11:51:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.821 "name": "raid_bdev1", 00:19:23.821 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:23.821 "strip_size_kb": 0, 00:19:23.821 "state": "online", 00:19:23.821 "raid_level": "raid1", 00:19:23.821 "superblock": true, 00:19:23.821 "num_base_bdevs": 2, 00:19:23.821 "num_base_bdevs_discovered": 2, 00:19:23.821 "num_base_bdevs_operational": 2, 00:19:23.821 "base_bdevs_list": [ 00:19:23.821 { 00:19:23.821 "name": "spare", 00:19:23.821 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:23.821 "is_configured": true, 00:19:23.821 "data_offset": 256, 00:19:23.821 "data_size": 7936 00:19:23.821 }, 00:19:23.821 { 00:19:23.821 "name": "BaseBdev2", 00:19:23.821 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:23.821 "is_configured": true, 00:19:23.821 "data_offset": 256, 00:19:23.821 "data_size": 7936 00:19:23.821 } 00:19:23.821 ] 00:19:23.821 }' 00:19:23.821 11:51:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.821 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.080 [2024-11-04 11:51:49.348098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.080 11:51:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.080 "name": "raid_bdev1", 00:19:24.080 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:24.080 "strip_size_kb": 0, 00:19:24.080 "state": "online", 00:19:24.080 
"raid_level": "raid1", 00:19:24.080 "superblock": true, 00:19:24.080 "num_base_bdevs": 2, 00:19:24.080 "num_base_bdevs_discovered": 1, 00:19:24.080 "num_base_bdevs_operational": 1, 00:19:24.080 "base_bdevs_list": [ 00:19:24.080 { 00:19:24.080 "name": null, 00:19:24.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.080 "is_configured": false, 00:19:24.080 "data_offset": 0, 00:19:24.080 "data_size": 7936 00:19:24.080 }, 00:19:24.080 { 00:19:24.080 "name": "BaseBdev2", 00:19:24.080 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:24.080 "is_configured": true, 00:19:24.080 "data_offset": 256, 00:19:24.080 "data_size": 7936 00:19:24.080 } 00:19:24.080 ] 00:19:24.080 }' 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.080 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.339 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:24.339 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.339 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.339 [2024-11-04 11:51:49.819336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.339 [2024-11-04 11:51:49.819690] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.339 [2024-11-04 11:51:49.819767] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:24.339 [2024-11-04 11:51:49.819868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.339 [2024-11-04 11:51:49.836838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:24.339 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.339 11:51:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:24.339 [2024-11-04 11:51:49.838773] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:25.717 "name": "raid_bdev1", 00:19:25.717 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:25.717 "strip_size_kb": 0, 00:19:25.717 "state": "online", 00:19:25.717 "raid_level": "raid1", 00:19:25.717 "superblock": true, 00:19:25.717 "num_base_bdevs": 2, 00:19:25.717 "num_base_bdevs_discovered": 2, 00:19:25.717 "num_base_bdevs_operational": 2, 00:19:25.717 "process": { 00:19:25.717 "type": "rebuild", 00:19:25.717 "target": "spare", 00:19:25.717 "progress": { 00:19:25.717 "blocks": 2560, 00:19:25.717 "percent": 32 00:19:25.717 } 00:19:25.717 }, 00:19:25.717 "base_bdevs_list": [ 00:19:25.717 { 00:19:25.717 "name": "spare", 00:19:25.717 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:25.717 "is_configured": true, 00:19:25.717 "data_offset": 256, 00:19:25.717 "data_size": 7936 00:19:25.717 }, 00:19:25.717 { 00:19:25.717 "name": "BaseBdev2", 00:19:25.717 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:25.717 "is_configured": true, 00:19:25.717 "data_offset": 256, 00:19:25.717 "data_size": 7936 00:19:25.717 } 00:19:25.717 ] 00:19:25.717 }' 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.717 11:51:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.717 [2024-11-04 11:51:50.982314] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.717 [2024-11-04 11:51:51.044452] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:25.717 [2024-11-04 11:51:51.044523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.718 [2024-11-04 11:51:51.044538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.718 [2024-11-04 11:51:51.044547] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.718 11:51:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.718 "name": "raid_bdev1", 00:19:25.718 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:25.718 "strip_size_kb": 0, 00:19:25.718 "state": "online", 00:19:25.718 "raid_level": "raid1", 00:19:25.718 "superblock": true, 00:19:25.718 "num_base_bdevs": 2, 00:19:25.718 "num_base_bdevs_discovered": 1, 00:19:25.718 "num_base_bdevs_operational": 1, 00:19:25.718 "base_bdevs_list": [ 00:19:25.718 { 00:19:25.718 "name": null, 00:19:25.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.718 "is_configured": false, 00:19:25.718 "data_offset": 0, 00:19:25.718 "data_size": 7936 00:19:25.718 }, 00:19:25.718 { 00:19:25.718 "name": "BaseBdev2", 00:19:25.718 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:25.718 "is_configured": true, 00:19:25.718 "data_offset": 256, 00:19:25.718 "data_size": 7936 00:19:25.718 } 00:19:25.718 ] 00:19:25.718 }' 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.718 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.282 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:26.282 11:51:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.282 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.282 [2024-11-04 11:51:51.512496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:26.282 [2024-11-04 11:51:51.512638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.282 [2024-11-04 11:51:51.512718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:26.282 [2024-11-04 11:51:51.512757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.282 [2024-11-04 11:51:51.513028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.282 [2024-11-04 11:51:51.513096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:26.282 [2024-11-04 11:51:51.513213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:26.282 [2024-11-04 11:51:51.513257] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:26.282 [2024-11-04 11:51:51.513316] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:26.282 [2024-11-04 11:51:51.513422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.282 [2024-11-04 11:51:51.529899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:26.282 spare 00:19:26.282 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.282 11:51:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:26.282 [2024-11-04 11:51:51.531865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:27.230 "name": "raid_bdev1", 00:19:27.230 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:27.230 "strip_size_kb": 0, 00:19:27.230 "state": "online", 00:19:27.230 "raid_level": "raid1", 00:19:27.230 "superblock": true, 00:19:27.230 "num_base_bdevs": 2, 00:19:27.230 "num_base_bdevs_discovered": 2, 00:19:27.230 "num_base_bdevs_operational": 2, 00:19:27.230 "process": { 00:19:27.230 "type": "rebuild", 00:19:27.230 "target": "spare", 00:19:27.230 "progress": { 00:19:27.230 "blocks": 2560, 00:19:27.230 "percent": 32 00:19:27.230 } 00:19:27.230 }, 00:19:27.230 "base_bdevs_list": [ 00:19:27.230 { 00:19:27.230 "name": "spare", 00:19:27.230 "uuid": "7e96e2ce-f045-56f6-bbae-b6520292e7ad", 00:19:27.230 "is_configured": true, 00:19:27.230 "data_offset": 256, 00:19:27.230 "data_size": 7936 00:19:27.230 }, 00:19:27.230 { 00:19:27.230 "name": "BaseBdev2", 00:19:27.230 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:27.230 "is_configured": true, 00:19:27.230 "data_offset": 256, 00:19:27.230 "data_size": 7936 00:19:27.230 } 00:19:27.230 ] 00:19:27.230 }' 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.230 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.230 [2024-11-04 
11:51:52.699312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.230 [2024-11-04 11:51:52.737527] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:27.230 [2024-11-04 11:51:52.737668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.230 [2024-11-04 11:51:52.737689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.230 [2024-11-04 11:51:52.737696] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.488 11:51:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.488 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.489 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.489 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.489 "name": "raid_bdev1", 00:19:27.489 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:27.489 "strip_size_kb": 0, 00:19:27.489 "state": "online", 00:19:27.489 "raid_level": "raid1", 00:19:27.489 "superblock": true, 00:19:27.489 "num_base_bdevs": 2, 00:19:27.489 "num_base_bdevs_discovered": 1, 00:19:27.489 "num_base_bdevs_operational": 1, 00:19:27.489 "base_bdevs_list": [ 00:19:27.489 { 00:19:27.489 "name": null, 00:19:27.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.489 "is_configured": false, 00:19:27.489 "data_offset": 0, 00:19:27.489 "data_size": 7936 00:19:27.489 }, 00:19:27.489 { 00:19:27.489 "name": "BaseBdev2", 00:19:27.489 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:27.489 "is_configured": true, 00:19:27.489 "data_offset": 256, 00:19:27.489 "data_size": 7936 00:19:27.489 } 00:19:27.489 ] 00:19:27.489 }' 00:19:27.489 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.489 11:51:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.747 11:51:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.747 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.005 "name": "raid_bdev1", 00:19:28.005 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:28.005 "strip_size_kb": 0, 00:19:28.005 "state": "online", 00:19:28.005 "raid_level": "raid1", 00:19:28.005 "superblock": true, 00:19:28.005 "num_base_bdevs": 2, 00:19:28.005 "num_base_bdevs_discovered": 1, 00:19:28.005 "num_base_bdevs_operational": 1, 00:19:28.005 "base_bdevs_list": [ 00:19:28.005 { 00:19:28.005 "name": null, 00:19:28.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.005 "is_configured": false, 00:19:28.005 "data_offset": 0, 00:19:28.005 "data_size": 7936 00:19:28.005 }, 00:19:28.005 { 00:19:28.005 "name": "BaseBdev2", 00:19:28.005 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:28.005 "is_configured": true, 00:19:28.005 "data_offset": 256, 
00:19:28.005 "data_size": 7936 00:19:28.005 } 00:19:28.005 ] 00:19:28.005 }' 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.005 [2024-11-04 11:51:53.392831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:28.005 [2024-11-04 11:51:53.392896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.005 [2024-11-04 11:51:53.392923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:28.005 [2024-11-04 11:51:53.392933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.005 [2024-11-04 11:51:53.393114] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.005 [2024-11-04 11:51:53.393126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:28.005 [2024-11-04 11:51:53.393187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:28.005 [2024-11-04 11:51:53.393201] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:28.005 [2024-11-04 11:51:53.393212] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:28.005 [2024-11-04 11:51:53.393223] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:28.005 BaseBdev1 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.005 11:51:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.939 11:51:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.939 "name": "raid_bdev1", 00:19:28.939 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:28.939 "strip_size_kb": 0, 00:19:28.939 "state": "online", 00:19:28.939 "raid_level": "raid1", 00:19:28.939 "superblock": true, 00:19:28.939 "num_base_bdevs": 2, 00:19:28.939 "num_base_bdevs_discovered": 1, 00:19:28.939 "num_base_bdevs_operational": 1, 00:19:28.939 "base_bdevs_list": [ 00:19:28.939 { 00:19:28.939 "name": null, 00:19:28.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.939 "is_configured": false, 00:19:28.939 "data_offset": 0, 00:19:28.939 "data_size": 7936 00:19:28.939 }, 00:19:28.939 { 00:19:28.939 "name": "BaseBdev2", 00:19:28.939 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:28.939 "is_configured": true, 00:19:28.939 "data_offset": 256, 00:19:28.939 "data_size": 7936 00:19:28.939 } 00:19:28.939 ] 00:19:28.939 }' 00:19:28.939 11:51:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.939 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.510 "name": "raid_bdev1", 00:19:29.510 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:29.510 "strip_size_kb": 0, 00:19:29.510 "state": "online", 00:19:29.510 "raid_level": "raid1", 00:19:29.510 "superblock": true, 00:19:29.510 "num_base_bdevs": 2, 00:19:29.510 "num_base_bdevs_discovered": 1, 00:19:29.510 "num_base_bdevs_operational": 1, 00:19:29.510 "base_bdevs_list": [ 00:19:29.510 { 00:19:29.510 "name": 
null, 00:19:29.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.510 "is_configured": false, 00:19:29.510 "data_offset": 0, 00:19:29.510 "data_size": 7936 00:19:29.510 }, 00:19:29.510 { 00:19:29.510 "name": "BaseBdev2", 00:19:29.510 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:29.510 "is_configured": true, 00:19:29.510 "data_offset": 256, 00:19:29.510 "data_size": 7936 00:19:29.510 } 00:19:29.510 ] 00:19:29.510 }' 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.510 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.511 [2024-11-04 11:51:54.950279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.511 [2024-11-04 11:51:54.950526] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:29.511 [2024-11-04 11:51:54.950586] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:29.511 request: 00:19:29.511 { 00:19:29.511 "base_bdev": "BaseBdev1", 00:19:29.511 "raid_bdev": "raid_bdev1", 00:19:29.511 "method": "bdev_raid_add_base_bdev", 00:19:29.511 "req_id": 1 00:19:29.511 } 00:19:29.511 Got JSON-RPC error response 00:19:29.511 response: 00:19:29.511 { 00:19:29.511 "code": -22, 00:19:29.511 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:29.511 } 00:19:29.511 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:29.511 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:29.511 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:29.511 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:29.511 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:29.511 11:51:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.449 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.710 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.710 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.710 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.710 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.710 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.710 11:51:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.710 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.710 "name": "raid_bdev1", 00:19:30.710 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:30.710 "strip_size_kb": 0, 
00:19:30.710 "state": "online", 00:19:30.710 "raid_level": "raid1", 00:19:30.710 "superblock": true, 00:19:30.710 "num_base_bdevs": 2, 00:19:30.710 "num_base_bdevs_discovered": 1, 00:19:30.710 "num_base_bdevs_operational": 1, 00:19:30.710 "base_bdevs_list": [ 00:19:30.710 { 00:19:30.710 "name": null, 00:19:30.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.710 "is_configured": false, 00:19:30.710 "data_offset": 0, 00:19:30.710 "data_size": 7936 00:19:30.710 }, 00:19:30.710 { 00:19:30.710 "name": "BaseBdev2", 00:19:30.710 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:30.710 "is_configured": true, 00:19:30.710 "data_offset": 256, 00:19:30.710 "data_size": 7936 00:19:30.710 } 00:19:30.710 ] 00:19:30.710 }' 00:19:30.710 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.710 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.969 
11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.969 "name": "raid_bdev1", 00:19:30.969 "uuid": "ebddce86-3941-4458-b845-797bfcccdae5", 00:19:30.969 "strip_size_kb": 0, 00:19:30.969 "state": "online", 00:19:30.969 "raid_level": "raid1", 00:19:30.969 "superblock": true, 00:19:30.969 "num_base_bdevs": 2, 00:19:30.969 "num_base_bdevs_discovered": 1, 00:19:30.969 "num_base_bdevs_operational": 1, 00:19:30.969 "base_bdevs_list": [ 00:19:30.969 { 00:19:30.969 "name": null, 00:19:30.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.969 "is_configured": false, 00:19:30.969 "data_offset": 0, 00:19:30.969 "data_size": 7936 00:19:30.969 }, 00:19:30.969 { 00:19:30.969 "name": "BaseBdev2", 00:19:30.969 "uuid": "17a0fb66-df30-5c79-895a-b8a84ab12dab", 00:19:30.969 "is_configured": true, 00:19:30.969 "data_offset": 256, 00:19:30.969 "data_size": 7936 00:19:30.969 } 00:19:30.969 ] 00:19:30.969 }' 00:19:30.969 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89279 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89279 ']' 00:19:31.230 11:51:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89279 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89279 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89279' 00:19:31.230 killing process with pid 89279 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89279 00:19:31.230 Received shutdown signal, test time was about 60.000000 seconds 00:19:31.230 00:19:31.230 Latency(us) 00:19:31.230 [2024-11-04T11:51:56.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.230 [2024-11-04T11:51:56.752Z] =================================================================================================================== 00:19:31.230 [2024-11-04T11:51:56.752Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.230 [2024-11-04 11:51:56.586916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.230 [2024-11-04 11:51:56.587132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.230 11:51:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89279 00:19:31.230 [2024-11-04 11:51:56.587218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:31.230 [2024-11-04 11:51:56.587242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:31.489 [2024-11-04 11:51:56.910872] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.868 ************************************ 00:19:32.868 END TEST raid_rebuild_test_sb_md_interleaved 00:19:32.868 ************************************ 00:19:32.868 11:51:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:32.868 00:19:32.868 real 0m17.667s 00:19:32.868 user 0m23.177s 00:19:32.868 sys 0m1.632s 00:19:32.868 11:51:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.868 11:51:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.868 11:51:58 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:32.868 11:51:58 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:32.868 11:51:58 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89279 ']' 00:19:32.868 11:51:58 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89279 00:19:32.868 11:51:58 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:32.868 ************************************ 00:19:32.868 END TEST bdev_raid 00:19:32.868 ************************************ 00:19:32.868 00:19:32.868 real 12m9.765s 00:19:32.868 user 16m28.581s 00:19:32.868 sys 1m51.952s 00:19:32.868 11:51:58 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.869 11:51:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.869 11:51:58 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:32.869 11:51:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:32.869 11:51:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:32.869 11:51:58 -- common/autotest_common.sh@10 -- # set +x 00:19:32.869 
************************************ 00:19:32.869 START TEST spdkcli_raid 00:19:32.869 ************************************ 00:19:32.869 11:51:58 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:32.869 * Looking for test storage... 00:19:32.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:32.869 11:51:58 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:32.869 11:51:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:32.869 11:51:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:33.128 11:51:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.128 11:51:58 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:33.128 11:51:58 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.128 11:51:58 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.128 --rc genhtml_branch_coverage=1 00:19:33.128 --rc genhtml_function_coverage=1 00:19:33.128 --rc genhtml_legend=1 00:19:33.128 --rc geninfo_all_blocks=1 00:19:33.128 --rc geninfo_unexecuted_blocks=1 00:19:33.128 00:19:33.128 ' 00:19:33.128 11:51:58 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.128 --rc genhtml_branch_coverage=1 00:19:33.128 --rc genhtml_function_coverage=1 00:19:33.128 --rc genhtml_legend=1 00:19:33.128 --rc geninfo_all_blocks=1 00:19:33.128 --rc geninfo_unexecuted_blocks=1 00:19:33.128 00:19:33.128 ' 00:19:33.128 
11:51:58 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.128 --rc genhtml_branch_coverage=1 00:19:33.128 --rc genhtml_function_coverage=1 00:19:33.128 --rc genhtml_legend=1 00:19:33.128 --rc geninfo_all_blocks=1 00:19:33.128 --rc geninfo_unexecuted_blocks=1 00:19:33.128 00:19:33.128 ' 00:19:33.128 11:51:58 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.128 --rc genhtml_branch_coverage=1 00:19:33.128 --rc genhtml_function_coverage=1 00:19:33.128 --rc genhtml_legend=1 00:19:33.128 --rc geninfo_all_blocks=1 00:19:33.128 --rc geninfo_unexecuted_blocks=1 00:19:33.128 00:19:33.128 ' 00:19:33.128 11:51:58 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:33.128 11:51:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:33.128 11:51:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:33.128 11:51:58 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:33.128 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:33.129 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:33.129 11:51:58 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89956 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:33.129 11:51:58 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89956 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 89956 ']' 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.129 11:51:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.129 [2024-11-04 11:51:58.553327] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:19:33.129 [2024-11-04 11:51:58.553582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89956 ] 00:19:33.387 [2024-11-04 11:51:58.732878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:33.387 [2024-11-04 11:51:58.858937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.387 [2024-11-04 11:51:58.858971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.323 11:51:59 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.323 11:51:59 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:19:34.323 11:51:59 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:34.323 11:51:59 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.323 11:51:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.324 11:51:59 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:34.324 11:51:59 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.324 11:51:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.324 11:51:59 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:34.324 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:34.324 ' 00:19:36.237 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:36.237 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:36.237 11:52:01 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:36.237 11:52:01 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.237 11:52:01 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.237 11:52:01 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:36.237 11:52:01 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:36.237 11:52:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.237 11:52:01 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:36.237 ' 00:19:37.175 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:37.434 11:52:02 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:37.434 11:52:02 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.434 11:52:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.434 11:52:02 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:37.434 11:52:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.434 11:52:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.434 11:52:02 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:37.434 11:52:02 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:38.003 11:52:03 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:38.003 11:52:03 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:38.003 11:52:03 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:38.003 11:52:03 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.003 11:52:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.003 11:52:03 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:38.003 11:52:03 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.003 11:52:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.003 11:52:03 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:38.003 ' 00:19:38.940 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:39.198 11:52:04 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:39.198 11:52:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.198 11:52:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.198 11:52:04 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:39.198 11:52:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:39.198 11:52:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.198 11:52:04 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:39.198 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:39.198 ' 00:19:40.578 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:40.578 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:40.837 11:52:06 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.837 11:52:06 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89956 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89956 ']' 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89956 00:19:40.837 11:52:06 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89956 00:19:40.837 killing process with pid 89956 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89956' 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 89956 00:19:40.837 11:52:06 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 89956 00:19:43.422 11:52:08 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:43.422 11:52:08 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89956 ']' 00:19:43.422 11:52:08 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89956 00:19:43.422 11:52:08 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 89956 ']' 00:19:43.422 11:52:08 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 89956 00:19:43.422 Process with pid 89956 is not found 00:19:43.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (89956) - No such process 00:19:43.422 11:52:08 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 89956 is not found' 00:19:43.422 11:52:08 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:43.422 11:52:08 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:43.422 11:52:08 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:43.422 11:52:08 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:43.422 00:19:43.422 real 0m10.509s 00:19:43.422 user 0m21.834s 00:19:43.422 sys 
0m1.145s 00:19:43.422 11:52:08 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:43.422 11:52:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.422 ************************************ 00:19:43.422 END TEST spdkcli_raid 00:19:43.422 ************************************ 00:19:43.422 11:52:08 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:43.422 11:52:08 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:43.422 11:52:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:43.422 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:19:43.422 ************************************ 00:19:43.422 START TEST blockdev_raid5f 00:19:43.422 ************************************ 00:19:43.422 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:43.422 * Looking for test storage... 00:19:43.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:43.422 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:43.422 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:19:43.422 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:43.683 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.683 11:52:08 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:43.683 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.683 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:43.683 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.683 --rc genhtml_branch_coverage=1 00:19:43.683 --rc genhtml_function_coverage=1 00:19:43.683 --rc genhtml_legend=1 00:19:43.683 --rc geninfo_all_blocks=1 00:19:43.683 --rc geninfo_unexecuted_blocks=1 00:19:43.683 00:19:43.683 ' 00:19:43.683 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:43.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.683 --rc genhtml_branch_coverage=1 00:19:43.683 --rc genhtml_function_coverage=1 00:19:43.683 --rc genhtml_legend=1 00:19:43.683 --rc geninfo_all_blocks=1 00:19:43.683 --rc geninfo_unexecuted_blocks=1 00:19:43.683 00:19:43.683 ' 00:19:43.683 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:43.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.683 --rc genhtml_branch_coverage=1 00:19:43.683 --rc genhtml_function_coverage=1 00:19:43.683 --rc genhtml_legend=1 00:19:43.683 --rc geninfo_all_blocks=1 00:19:43.683 --rc geninfo_unexecuted_blocks=1 00:19:43.683 00:19:43.683 ' 00:19:43.683 11:52:08 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:43.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.683 --rc genhtml_branch_coverage=1 00:19:43.683 --rc genhtml_function_coverage=1 00:19:43.683 --rc genhtml_legend=1 00:19:43.683 --rc geninfo_all_blocks=1 00:19:43.683 --rc geninfo_unexecuted_blocks=1 00:19:43.683 00:19:43.683 ' 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:43.683 11:52:08 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90249 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90249 00:19:43.683 11:52:09 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90249 ']' 00:19:43.683 11:52:09 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.683 11:52:09 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:43.683 11:52:09 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:43.683 11:52:09 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.683 11:52:09 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:43.683 11:52:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.683 [2024-11-04 11:52:09.111956] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:19:43.683 [2024-11-04 11:52:09.112204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90249 ] 00:19:43.972 [2024-11-04 11:52:09.275066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.972 [2024-11-04 11:52:09.402873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.909 11:52:10 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:44.909 11:52:10 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:19:44.909 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:44.909 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:44.909 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:44.909 11:52:10 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.909 11:52:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.909 Malloc0 00:19:44.909 Malloc1 00:19:45.168 Malloc2 00:19:45.168 11:52:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.168 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:45.168 11:52:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.168 11:52:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a6991ba3-5514-47a5-8a35-fc6991eaf137"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a6991ba3-5514-47a5-8a35-fc6991eaf137",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a6991ba3-5514-47a5-8a35-fc6991eaf137",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ab09a441-cef6-4d9a-aa64-096a9becadfe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"94bf4e50-73c2-442b-a9c0-35ba50502c03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "003414bb-2979-4132-9ca5-0b4ddaf0bad0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:45.169 11:52:10 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90249 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90249 ']' 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90249 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90249 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90249' 00:19:45.169 killing process with pid 90249 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90249 00:19:45.169 11:52:10 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90249 00:19:48.455 11:52:13 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:48.455 11:52:13 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:48.455 11:52:13 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:48.455 11:52:13 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:48.455 11:52:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:48.455 ************************************ 00:19:48.455 START TEST bdev_hello_world 00:19:48.455 ************************************ 00:19:48.455 11:52:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:48.455 [2024-11-04 11:52:13.687129] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:19:48.455 [2024-11-04 11:52:13.687281] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90316 ] 00:19:48.455 [2024-11-04 11:52:13.860061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.714 [2024-11-04 11:52:13.985150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.281 [2024-11-04 11:52:14.546899] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:49.281 [2024-11-04 11:52:14.546955] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:49.281 [2024-11-04 11:52:14.546972] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:49.281 [2024-11-04 11:52:14.547474] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:49.281 [2024-11-04 11:52:14.547673] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:49.281 [2024-11-04 11:52:14.547693] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:49.281 [2024-11-04 11:52:14.547754] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:49.281 00:19:49.281 [2024-11-04 11:52:14.547779] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:50.659 00:19:50.659 real 0m2.546s 00:19:50.659 user 0m2.189s 00:19:50.659 sys 0m0.234s 00:19:50.659 ************************************ 00:19:50.659 END TEST bdev_hello_world 00:19:50.659 ************************************ 00:19:50.659 11:52:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:50.659 11:52:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:50.918 11:52:16 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:50.919 11:52:16 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:50.919 11:52:16 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:50.919 11:52:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.919 ************************************ 00:19:50.919 START TEST bdev_bounds 00:19:50.919 ************************************ 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90367 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90367' 00:19:50.919 Process bdevio pid: 90367 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90367 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90367 ']' 00:19:50.919 11:52:16 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.919 11:52:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:50.919 [2024-11-04 11:52:16.321256] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:19:50.919 [2024-11-04 11:52:16.321388] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90367 ] 00:19:51.178 [2024-11-04 11:52:16.503105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:51.178 [2024-11-04 11:52:16.639763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.178 [2024-11-04 11:52:16.639852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.178 [2024-11-04 11:52:16.639880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.787 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.787 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:19:51.787 11:52:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:52.046 I/O targets: 00:19:52.046 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:52.046 00:19:52.046 
00:19:52.046 CUnit - A unit testing framework for C - Version 2.1-3 00:19:52.046 http://cunit.sourceforge.net/ 00:19:52.046 00:19:52.046 00:19:52.046 Suite: bdevio tests on: raid5f 00:19:52.046 Test: blockdev write read block ...passed 00:19:52.046 Test: blockdev write zeroes read block ...passed 00:19:52.046 Test: blockdev write zeroes read no split ...passed 00:19:52.046 Test: blockdev write zeroes read split ...passed 00:19:52.305 Test: blockdev write zeroes read split partial ...passed 00:19:52.305 Test: blockdev reset ...passed 00:19:52.305 Test: blockdev write read 8 blocks ...passed 00:19:52.305 Test: blockdev write read size > 128k ...passed 00:19:52.305 Test: blockdev write read invalid size ...passed 00:19:52.305 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:52.305 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:52.305 Test: blockdev write read max offset ...passed 00:19:52.305 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:52.305 Test: blockdev writev readv 8 blocks ...passed 00:19:52.305 Test: blockdev writev readv 30 x 1block ...passed 00:19:52.305 Test: blockdev writev readv block ...passed 00:19:52.305 Test: blockdev writev readv size > 128k ...passed 00:19:52.305 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:52.305 Test: blockdev comparev and writev ...passed 00:19:52.305 Test: blockdev nvme passthru rw ...passed 00:19:52.305 Test: blockdev nvme passthru vendor specific ...passed 00:19:52.305 Test: blockdev nvme admin passthru ...passed 00:19:52.305 Test: blockdev copy ...passed 00:19:52.305 00:19:52.305 Run Summary: Type Total Ran Passed Failed Inactive 00:19:52.305 suites 1 1 n/a 0 0 00:19:52.305 tests 23 23 23 0 0 00:19:52.305 asserts 130 130 130 0 n/a 00:19:52.305 00:19:52.305 Elapsed time = 0.689 seconds 00:19:52.305 0 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90367 00:19:52.305 
11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90367 ']' 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90367 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90367 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90367' 00:19:52.305 killing process with pid 90367 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90367 00:19:52.305 11:52:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90367 00:19:54.212 11:52:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:54.212 00:19:54.212 real 0m3.073s 00:19:54.212 user 0m7.660s 00:19:54.212 sys 0m0.404s 00:19:54.212 11:52:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:54.212 11:52:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:54.212 ************************************ 00:19:54.212 END TEST bdev_bounds 00:19:54.212 ************************************ 00:19:54.212 11:52:19 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:54.212 11:52:19 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:54.212 11:52:19 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:54.212 
11:52:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:54.212 ************************************ 00:19:54.212 START TEST bdev_nbd 00:19:54.212 ************************************ 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90432 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90432 /var/tmp/spdk-nbd.sock 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90432 ']' 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:54.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.212 11:52:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:54.212 [2024-11-04 11:52:19.472770] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:19:54.212 [2024-11-04 11:52:19.473025] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.212 [2024-11-04 11:52:19.654091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.477 [2024-11-04 11:52:19.780882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:55.045 11:52:20 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:55.302 1+0 records in 00:19:55.302 1+0 records out 00:19:55.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682531 s, 6.0 MB/s 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:55.302 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:55.560 { 00:19:55.560 "nbd_device": "/dev/nbd0", 00:19:55.560 "bdev_name": "raid5f" 00:19:55.560 } 00:19:55.560 ]' 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:55.560 { 00:19:55.560 "nbd_device": "/dev/nbd0", 00:19:55.560 "bdev_name": "raid5f" 00:19:55.560 } 00:19:55.560 ]' 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.560 11:52:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.818 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.076 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:56.334 /dev/nbd0 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.334 11:52:21 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.334 1+0 records in 00:19:56.334 1+0 records out 00:19:56.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341511 s, 12.0 MB/s 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.334 11:52:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:56.592 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:56.592 { 00:19:56.592 "nbd_device": "/dev/nbd0", 00:19:56.592 "bdev_name": "raid5f" 00:19:56.592 } 00:19:56.592 ]' 00:19:56.592 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:56.592 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:56.592 { 00:19:56.592 "nbd_device": "/dev/nbd0", 00:19:56.592 "bdev_name": "raid5f" 00:19:56.592 } 00:19:56.592 ]' 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:56.851 256+0 records in 00:19:56.851 256+0 records out 00:19:56.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621138 s, 169 MB/s 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:56.851 256+0 records in 00:19:56.851 256+0 records out 00:19:56.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0372217 s, 28.2 MB/s 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.851 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.109 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
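The nbd_dd_data_verify flow traced above boils down to a three-step pattern: dd 1 MiB of random data into a temp file, dd that file onto the device (with oflag=direct in the real helper), then cmp the two byte-for-byte before removing the temp file. A self-contained sketch of the same pattern, with a second temp file standing in for /dev/nbd0 so it runs without a block device:

```shell
#!/usr/bin/env bash
# Write/verify round trip modeled on nbd_dd_data_verify: random 1 MiB source,
# copy to the target, byte-compare. "$dst" stands in for /dev/nbd0 here (the
# real helper writes to the nbd device with oflag=direct).
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=256 status=none  # 256 x 4 KiB = 1 MiB
dd if="$src" of="$dst" bs=4096 count=256 status=none        # stand-in for the device write
cmp -b -n 1M "$src" "$dst" && echo "verify ok"              # byte-for-byte compare of the first 1M
rm -f "$src" "$dst"
```

Running the sketch prints `verify ok`; a mismatch would make cmp exit non-zero and report the first differing byte, which is exactly how the test above would fail.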
00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:57.367 11:52:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:57.625 malloc_lvol_verify 00:19:57.625 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:57.884 67e4fb07-63a8-436c-8eeb-d4153761d602 00:19:57.884 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:58.142 f0a5289d-0cb9-425e-8866-fec549e839ed 00:19:58.400 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:58.658 /dev/nbd0 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:58.658 mke2fs 1.47.0 (5-Feb-2023) 00:19:58.658 Discarding device blocks: 0/4096 done 00:19:58.658 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:58.658 00:19:58.658 Allocating group tables: 0/1 done 00:19:58.658 Writing inode tables: 0/1 done 00:19:58.658 Creating journal (1024 blocks): done 00:19:58.658 Writing superblocks and filesystem accounting information: 0/1 done 00:19:58.658 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.658 11:52:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90432 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90432 ']' 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90432 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90432 00:19:58.927 killing process with pid 90432 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90432' 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90432 00:19:58.927 11:52:24 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90432 00:20:00.835 ************************************ 00:20:00.835 END TEST bdev_nbd 00:20:00.835 ************************************ 00:20:00.835 11:52:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:00.835 00:20:00.835 real 0m6.658s 00:20:00.835 user 0m9.270s 00:20:00.835 sys 0m1.345s 00:20:00.835 11:52:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:00.835 11:52:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:00.835 11:52:26 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:00.835 11:52:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:00.835 11:52:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:00.835 11:52:26 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:00.835 11:52:26 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:00.835 11:52:26 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:00.835 11:52:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:00.835 ************************************ 00:20:00.835 START TEST bdev_fio 00:20:00.835 ************************************ 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:00.835 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
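The `[[ fio-3.35 == *\f\i\o\-\3* ]]` check traced above is a bash glob match against `fio --version` output; fio_config_gen uses it to decide whether to append serialize_overlap=1 to the generated job file. A minimal sketch of that gating logic (the version string is hard-coded here rather than read from a real fio binary):

```shell
#!/usr/bin/env bash
# Gate an fio job-file option on the major version, as fio_config_gen does
# above. fio_version is an assumed stand-in for: "$fio_dir"/fio --version
fio_version="fio-3.35"
extra_opts=""
if [[ $fio_version == *fio-3* ]]; then
    # fio 3.x understands serialize_overlap, which serializes overlapping
    # in-flight I/Os so verify results stay deterministic
    extra_opts="serialize_overlap=1"
fi
echo "$extra_opts"
```

The escaped form in the trace (`*\f\i\o\-\3*`) is the same pattern after bash's xtrace quoting; both match any version string containing "fio-3".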
00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:00.835 ************************************ 00:20:00.835 START TEST bdev_fio_rw_verify 00:20:00.835 ************************************ 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.835 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:00.836 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:00.836 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:20:00.836 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:00.836 11:52:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:01.095 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:01.095 fio-3.35 00:20:01.095 Starting 1 thread 00:20:13.337 00:20:13.337 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90645: Mon Nov 4 11:52:37 2024 00:20:13.337 read: IOPS=8243, BW=32.2MiB/s (33.8MB/s)(322MiB/10001msec) 00:20:13.337 slat (usec): min=24, max=112, avg=28.76, stdev= 2.26 00:20:13.337 clat (usec): min=14, max=481, avg=192.31, stdev=68.07 00:20:13.337 lat (usec): min=43, max=509, avg=221.07, stdev=68.27 00:20:13.337 clat percentiles (usec): 00:20:13.337 | 50.000th=[ 190], 99.000th=[ 306], 99.900th=[ 343], 99.990th=[ 404], 00:20:13.337 | 99.999th=[ 482] 00:20:13.337 write: IOPS=8644, BW=33.8MiB/s (35.4MB/s)(333MiB/9866msec); 0 zone resets 00:20:13.337 slat (usec): min=11, max=292, avg=25.27, stdev= 5.43 00:20:13.337 clat (usec): min=86, max=773, avg=442.61, stdev=55.51 00:20:13.337 lat (usec): min=111, max=910, avg=467.88, stdev=56.30 00:20:13.337 clat percentiles (usec): 00:20:13.337 | 50.000th=[ 449], 99.000th=[ 553], 99.900th=[ 668], 99.990th=[ 725], 00:20:13.337 | 99.999th=[ 775] 00:20:13.337 bw ( KiB/s): min=30936, max=37392, per=99.02%, avg=34238.32, stdev=1268.91, samples=19 00:20:13.337 iops : min= 7734, max= 9348, avg=8559.58, stdev=317.23, samples=19 00:20:13.337 lat (usec) : 20=0.01%, 100=6.01%, 250=30.90%, 
500=55.72%, 750=7.36% 00:20:13.337 lat (usec) : 1000=0.01% 00:20:13.338 cpu : usr=98.69%, sys=0.50%, ctx=30, majf=0, minf=7194 00:20:13.338 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.338 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.338 issued rwts: total=82439,85285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:13.338 00:20:13.338 Run status group 0 (all jobs): 00:20:13.338 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=322MiB (338MB), run=10001-10001msec 00:20:13.338 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=333MiB (349MB), run=9866-9866msec 00:20:13.904 ----------------------------------------------------- 00:20:13.904 Suppressions used: 00:20:13.904 count bytes template 00:20:13.904 1 7 /usr/src/fio/parse.c 00:20:13.904 303 29088 /usr/src/fio/iolog.c 00:20:13.904 1 8 libtcmalloc_minimal.so 00:20:13.904 1 904 libcrypto.so 00:20:13.904 ----------------------------------------------------- 00:20:13.904 00:20:14.163 00:20:14.163 real 0m13.263s 00:20:14.163 user 0m13.688s 00:20:14.163 sys 0m0.601s 00:20:14.163 ************************************ 00:20:14.163 END TEST bdev_fio_rw_verify 00:20:14.163 ************************************ 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a6991ba3-5514-47a5-8a35-fc6991eaf137"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a6991ba3-5514-47a5-8a35-fc6991eaf137",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a6991ba3-5514-47a5-8a35-fc6991eaf137",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ab09a441-cef6-4d9a-aa64-096a9becadfe",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "94bf4e50-73c2-442b-a9c0-35ba50502c03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "003414bb-2979-4132-9ca5-0b4ddaf0bad0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:14.163 /home/vagrant/spdk_repo/spdk 00:20:14.163 ************************************ 00:20:14.163 END TEST bdev_fio 00:20:14.163 ************************************ 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT 
SIGTERM EXIT 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:20:14.163 00:20:14.163 real 0m13.460s 00:20:14.163 user 0m13.787s 00:20:14.163 sys 0m0.682s 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.163 11:52:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 11:52:39 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:14.163 11:52:39 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:14.163 11:52:39 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:14.163 11:52:39 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:14.163 11:52:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 ************************************ 00:20:14.163 START TEST bdev_verify 00:20:14.163 ************************************ 00:20:14.163 11:52:39 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:14.421 [2024-11-04 11:52:39.691606] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 
00:20:14.421 [2024-11-04 11:52:39.691919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90809 ] 00:20:14.421 [2024-11-04 11:52:39.862250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:14.678 [2024-11-04 11:52:40.004312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.678 [2024-11-04 11:52:40.004322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.292 Running I/O for 5 seconds... 00:20:17.172 11748.00 IOPS, 45.89 MiB/s [2024-11-04T11:52:44.067Z] 11798.50 IOPS, 46.09 MiB/s [2024-11-04T11:52:44.643Z] 11185.67 IOPS, 43.69 MiB/s [2024-11-04T11:52:46.019Z] 10875.25 IOPS, 42.48 MiB/s [2024-11-04T11:52:46.019Z] 10734.20 IOPS, 41.93 MiB/s 00:20:20.497 Latency(us) 00:20:20.497 [2024-11-04T11:52:46.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.497 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:20.497 Verification LBA range: start 0x0 length 0x2000 00:20:20.497 raid5f : 5.02 5358.33 20.93 0.00 0.00 35634.24 380.98 28274.92 00:20:20.497 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.497 Verification LBA range: start 0x2000 length 0x2000 00:20:20.497 raid5f : 5.02 5380.23 21.02 0.00 0.00 35815.58 264.72 28274.92 00:20:20.497 [2024-11-04T11:52:46.019Z] =================================================================================================================== 00:20:20.497 [2024-11-04T11:52:46.019Z] Total : 10738.57 41.95 0.00 0.00 35725.10 264.72 28274.92 00:20:21.878 00:20:21.878 real 0m7.777s 00:20:21.878 user 0m14.298s 00:20:21.878 sys 0m0.286s 00:20:21.878 11:52:47 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:21.878 
************************************ 00:20:21.878 END TEST bdev_verify 00:20:21.878 ************************************ 00:20:21.878 11:52:47 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:22.136 11:52:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:22.136 11:52:47 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:22.136 11:52:47 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:22.136 11:52:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:22.136 ************************************ 00:20:22.136 START TEST bdev_verify_big_io 00:20:22.136 ************************************ 00:20:22.136 11:52:47 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:22.136 [2024-11-04 11:52:47.510624] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:20:22.136 [2024-11-04 11:52:47.510765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90908 ] 00:20:22.395 [2024-11-04 11:52:47.689485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:22.395 [2024-11-04 11:52:47.849693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.395 [2024-11-04 11:52:47.849721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.333 Running I/O for 5 seconds... 
00:20:25.206 505.00 IOPS, 31.56 MiB/s [2024-11-04T11:52:52.109Z] 569.00 IOPS, 35.56 MiB/s [2024-11-04T11:52:53.048Z] 633.33 IOPS, 39.58 MiB/s [2024-11-04T11:52:53.987Z] 697.00 IOPS, 43.56 MiB/s [2024-11-04T11:52:53.987Z] 723.40 IOPS, 45.21 MiB/s 00:20:28.465 Latency(us) 00:20:28.465 [2024-11-04T11:52:53.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.465 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:28.465 Verification LBA range: start 0x0 length 0x200 00:20:28.465 raid5f : 5.19 354.47 22.15 0.00 0.00 8683288.10 169.03 417598.83 00:20:28.465 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:28.465 Verification LBA range: start 0x200 length 0x200 00:20:28.465 raid5f : 5.11 347.91 21.74 0.00 0.00 8999157.76 220.90 423093.55 00:20:28.465 [2024-11-04T11:52:53.987Z] =================================================================================================================== 00:20:28.465 [2024-11-04T11:52:53.987Z] Total : 702.38 43.90 0.00 0.00 8838516.47 169.03 423093.55 00:20:29.841 00:20:29.841 real 0m7.882s 00:20:29.841 user 0m14.552s 00:20:29.841 sys 0m0.275s 00:20:29.841 11:52:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:29.841 ************************************ 00:20:29.841 END TEST bdev_verify_big_io 00:20:29.841 ************************************ 00:20:29.841 11:52:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:29.841 11:52:55 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.841 11:52:55 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:29.841 11:52:55 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:29.841 11:52:55 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:30.100 ************************************ 00:20:30.100 START TEST bdev_write_zeroes 00:20:30.100 ************************************ 00:20:30.100 11:52:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.100 [2024-11-04 11:52:55.458550] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:20:30.100 [2024-11-04 11:52:55.458701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91012 ] 00:20:30.359 [2024-11-04 11:52:55.634350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.359 [2024-11-04 11:52:55.760337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.927 Running I/O for 1 seconds... 
00:20:31.862 22167.00 IOPS, 86.59 MiB/s 00:20:31.862 Latency(us) 00:20:31.862 [2024-11-04T11:52:57.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.862 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:31.862 raid5f : 1.01 22121.85 86.41 0.00 0.00 5764.28 1731.41 8356.56 00:20:31.862 [2024-11-04T11:52:57.384Z] =================================================================================================================== 00:20:31.862 [2024-11-04T11:52:57.384Z] Total : 22121.85 86.41 0.00 0.00 5764.28 1731.41 8356.56 00:20:33.278 ************************************ 00:20:33.278 00:20:33.278 real 0m3.423s 00:20:33.278 user 0m3.066s 00:20:33.278 sys 0m0.226s 00:20:33.278 11:52:58 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:33.278 11:52:58 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:33.278 END TEST bdev_write_zeroes 00:20:33.278 ************************************ 00:20:33.536 11:52:58 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:33.536 11:52:58 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:33.536 11:52:58 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:33.536 11:52:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:33.536 ************************************ 00:20:33.536 START TEST bdev_json_nonenclosed 00:20:33.536 ************************************ 00:20:33.536 11:52:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:33.536 [2024-11-04 
11:52:58.944096] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:20:33.536 [2024-11-04 11:52:58.944218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91065 ] 00:20:33.795 [2024-11-04 11:52:59.117412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.795 [2024-11-04 11:52:59.234918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.795 [2024-11-04 11:52:59.235017] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:33.795 [2024-11-04 11:52:59.235045] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:33.795 [2024-11-04 11:52:59.235055] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:34.055 00:20:34.055 real 0m0.634s 00:20:34.055 user 0m0.408s 00:20:34.055 sys 0m0.121s 00:20:34.055 11:52:59 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:34.055 ************************************ 00:20:34.055 END TEST bdev_json_nonenclosed 00:20:34.055 ************************************ 00:20:34.055 11:52:59 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:34.055 11:52:59 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:34.055 11:52:59 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:34.055 11:52:59 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:34.055 11:52:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:34.055 
************************************ 00:20:34.055 START TEST bdev_json_nonarray 00:20:34.055 ************************************ 00:20:34.055 11:52:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:34.312 [2024-11-04 11:52:59.638606] Starting SPDK v25.01-pre git sha1 3edf9f121 / DPDK 24.03.0 initialization... 00:20:34.312 [2024-11-04 11:52:59.638724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91095 ] 00:20:34.312 [2024-11-04 11:52:59.814306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.571 [2024-11-04 11:52:59.928669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.571 [2024-11-04 11:52:59.928775] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:34.571 [2024-11-04 11:52:59.928793] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:34.571 [2024-11-04 11:52:59.928812] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:34.829 00:20:34.829 real 0m0.640s 00:20:34.829 user 0m0.412s 00:20:34.829 sys 0m0.123s 00:20:34.829 ************************************ 00:20:34.829 END TEST bdev_json_nonarray 00:20:34.829 ************************************ 00:20:34.829 11:53:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:34.829 11:53:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:34.829 11:53:00 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:34.829 00:20:34.829 real 0m51.479s 00:20:34.829 user 1m10.553s 00:20:34.829 sys 0m4.689s 00:20:34.829 11:53:00 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:34.829 11:53:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:34.829 
************************************ 00:20:34.829 END TEST blockdev_raid5f 00:20:34.829 ************************************ 00:20:34.829 11:53:00 -- spdk/autotest.sh@194 -- # uname -s 00:20:34.829 11:53:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:34.829 11:53:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:34.829 11:53:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:34.829 11:53:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:34.829 11:53:00 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:34.829 11:53:00 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:34.829 11:53:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:34.829 11:53:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.087 11:53:00 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:35.087 11:53:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:20:35.087 11:53:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:35.087 11:53:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:35.087 11:53:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:20:35.087 11:53:00 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:20:35.087 11:53:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:20:35.087 11:53:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:35.087 11:53:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.087 11:53:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:20:35.087 11:53:00 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:20:35.087 11:53:00 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:20:35.087 11:53:00 -- common/autotest_common.sh@10 -- # set +x 00:20:36.986 INFO: APP EXITING 00:20:36.986 INFO: killing all VMs 00:20:36.986 INFO: killing vhost app 00:20:36.986 INFO: EXIT DONE 00:20:37.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.552 Waiting for block devices as requested 00:20:37.552 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:37.552 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.535 Cleaning 00:20:38.535 Removing: /var/run/dpdk/spdk0/config 00:20:38.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:38.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:38.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:38.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:38.535 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:38.535 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:38.535 Removing: /dev/shm/spdk_tgt_trace.pid57073 00:20:38.535 Removing: /var/run/dpdk/spdk0 00:20:38.535 Removing: /var/run/dpdk/spdk_pid56831 00:20:38.535 Removing: /var/run/dpdk/spdk_pid57073 00:20:38.535 Removing: /var/run/dpdk/spdk_pid57302 00:20:38.535 Removing: /var/run/dpdk/spdk_pid57417 00:20:38.535 Removing: /var/run/dpdk/spdk_pid57473 00:20:38.535 Removing: /var/run/dpdk/spdk_pid57607 00:20:38.535 Removing: /var/run/dpdk/spdk_pid57630 
00:20:38.535 Removing: /var/run/dpdk/spdk_pid57840 00:20:38.535 Removing: /var/run/dpdk/spdk_pid57946 00:20:38.535 Removing: /var/run/dpdk/spdk_pid58058 00:20:38.535 Removing: /var/run/dpdk/spdk_pid58181 00:20:38.535 Removing: /var/run/dpdk/spdk_pid58289 00:20:38.535 Removing: /var/run/dpdk/spdk_pid58327 00:20:38.535 Removing: /var/run/dpdk/spdk_pid58365 00:20:38.535 Removing: /var/run/dpdk/spdk_pid58441 00:20:38.535 Removing: /var/run/dpdk/spdk_pid58558 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59013 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59088 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59162 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59180 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59340 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59356 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59512 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59528 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59598 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59621 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59696 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59714 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59920 00:20:38.535 Removing: /var/run/dpdk/spdk_pid59951 00:20:38.535 Removing: /var/run/dpdk/spdk_pid60040 00:20:38.535 Removing: /var/run/dpdk/spdk_pid61410 00:20:38.535 Removing: /var/run/dpdk/spdk_pid61616 00:20:38.535 Removing: /var/run/dpdk/spdk_pid61762 00:20:38.535 Removing: /var/run/dpdk/spdk_pid62405 00:20:38.535 Removing: /var/run/dpdk/spdk_pid62622 00:20:38.795 Removing: /var/run/dpdk/spdk_pid62762 00:20:38.795 Removing: /var/run/dpdk/spdk_pid63411 00:20:38.795 Removing: /var/run/dpdk/spdk_pid63741 00:20:38.795 Removing: /var/run/dpdk/spdk_pid63881 00:20:38.795 Removing: /var/run/dpdk/spdk_pid65283 00:20:38.795 Removing: /var/run/dpdk/spdk_pid65536 00:20:38.795 Removing: /var/run/dpdk/spdk_pid65687 00:20:38.795 Removing: /var/run/dpdk/spdk_pid67083 00:20:38.795 Removing: /var/run/dpdk/spdk_pid67342 00:20:38.795 Removing: /var/run/dpdk/spdk_pid67482 
00:20:38.795 Removing: /var/run/dpdk/spdk_pid68873 00:20:38.795 Removing: /var/run/dpdk/spdk_pid69319 00:20:38.795 Removing: /var/run/dpdk/spdk_pid69459 00:20:38.795 Removing: /var/run/dpdk/spdk_pid70958 00:20:38.795 Removing: /var/run/dpdk/spdk_pid71226 00:20:38.795 Removing: /var/run/dpdk/spdk_pid71368 00:20:38.795 Removing: /var/run/dpdk/spdk_pid72862 00:20:38.795 Removing: /var/run/dpdk/spdk_pid73126 00:20:38.795 Removing: /var/run/dpdk/spdk_pid73272 00:20:38.795 Removing: /var/run/dpdk/spdk_pid74758 00:20:38.795 Removing: /var/run/dpdk/spdk_pid75252 00:20:38.795 Removing: /var/run/dpdk/spdk_pid75397 00:20:38.795 Removing: /var/run/dpdk/spdk_pid75541 00:20:38.795 Removing: /var/run/dpdk/spdk_pid75959 00:20:38.795 Removing: /var/run/dpdk/spdk_pid76702 00:20:38.795 Removing: /var/run/dpdk/spdk_pid77097 00:20:38.795 Removing: /var/run/dpdk/spdk_pid77791 00:20:38.795 Removing: /var/run/dpdk/spdk_pid78238 00:20:38.795 Removing: /var/run/dpdk/spdk_pid78992 00:20:38.795 Removing: /var/run/dpdk/spdk_pid79401 00:20:38.795 Removing: /var/run/dpdk/spdk_pid81377 00:20:38.795 Removing: /var/run/dpdk/spdk_pid81815 00:20:38.795 Removing: /var/run/dpdk/spdk_pid82257 00:20:38.795 Removing: /var/run/dpdk/spdk_pid84350 00:20:38.795 Removing: /var/run/dpdk/spdk_pid84839 00:20:38.795 Removing: /var/run/dpdk/spdk_pid85361 00:20:38.795 Removing: /var/run/dpdk/spdk_pid86418 00:20:38.795 Removing: /var/run/dpdk/spdk_pid86741 00:20:38.795 Removing: /var/run/dpdk/spdk_pid87678 00:20:38.795 Removing: /var/run/dpdk/spdk_pid88005 00:20:38.795 Removing: /var/run/dpdk/spdk_pid88956 00:20:38.795 Removing: /var/run/dpdk/spdk_pid89279 00:20:38.795 Removing: /var/run/dpdk/spdk_pid89956 00:20:38.795 Removing: /var/run/dpdk/spdk_pid90249 00:20:38.795 Removing: /var/run/dpdk/spdk_pid90316 00:20:38.795 Removing: /var/run/dpdk/spdk_pid90367 00:20:38.795 Removing: /var/run/dpdk/spdk_pid90629 00:20:38.795 Removing: /var/run/dpdk/spdk_pid90809 00:20:38.795 Removing: /var/run/dpdk/spdk_pid90908 
00:20:38.795 Removing: /var/run/dpdk/spdk_pid91012 00:20:38.795 Removing: /var/run/dpdk/spdk_pid91065 00:20:38.795 Removing: /var/run/dpdk/spdk_pid91095 00:20:38.795 Clean 00:20:39.053 11:53:04 -- common/autotest_common.sh@1451 -- # return 0 00:20:39.053 11:53:04 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:20:39.053 11:53:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.053 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:20:39.053 11:53:04 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:20:39.053 11:53:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.053 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:20:39.053 11:53:04 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:39.053 11:53:04 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:39.053 11:53:04 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:39.053 11:53:04 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:20:39.053 11:53:04 -- spdk/autotest.sh@394 -- # hostname 00:20:39.053 11:53:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:39.312 geninfo: WARNING: invalid characters removed from testname! 
00:21:05.876 11:53:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:06.444 11:53:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:08.978 11:53:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:11.513 11:53:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:13.419 11:53:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:16.701 11:53:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:18.650 11:53:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:18.650 11:53:44 -- spdk/autorun.sh@1 -- $ timing_finish 00:21:18.650 11:53:44 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:21:18.650 11:53:44 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:18.650 11:53:44 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:18.650 11:53:44 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:18.650 + [[ -n 5429 ]] 00:21:18.650 + sudo kill 5429 00:21:18.916 [Pipeline] } 00:21:18.931 [Pipeline] // timeout 00:21:18.936 [Pipeline] } 00:21:18.949 [Pipeline] // stage 00:21:18.954 [Pipeline] } 00:21:18.967 [Pipeline] // catchError 00:21:18.975 [Pipeline] stage 00:21:18.977 [Pipeline] { (Stop VM) 00:21:18.988 [Pipeline] sh 00:21:19.266 + vagrant halt 00:21:22.551 ==> default: Halting domain... 00:21:30.689 [Pipeline] sh 00:21:30.969 + vagrant destroy -f 00:21:33.501 ==> default: Removing domain... 
00:21:33.771 [Pipeline] sh 00:21:34.052 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:21:34.060 [Pipeline] } 00:21:34.076 [Pipeline] // stage 00:21:34.081 [Pipeline] } 00:21:34.095 [Pipeline] // dir 00:21:34.100 [Pipeline] } 00:21:34.114 [Pipeline] // wrap 00:21:34.119 [Pipeline] } 00:21:34.131 [Pipeline] // catchError 00:21:34.140 [Pipeline] stage 00:21:34.141 [Pipeline] { (Epilogue) 00:21:34.153 [Pipeline] sh 00:21:34.436 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:39.719 [Pipeline] catchError 00:21:39.720 [Pipeline] { 00:21:39.728 [Pipeline] sh 00:21:40.007 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:40.008 Artifacts sizes are good 00:21:40.017 [Pipeline] } 00:21:40.031 [Pipeline] // catchError 00:21:40.041 [Pipeline] archiveArtifacts 00:21:40.047 Archiving artifacts 00:21:40.146 [Pipeline] cleanWs 00:21:40.157 [WS-CLEANUP] Deleting project workspace... 00:21:40.157 [WS-CLEANUP] Deferred wipeout is used... 00:21:40.164 [WS-CLEANUP] done 00:21:40.166 [Pipeline] } 00:21:40.179 [Pipeline] // stage 00:21:40.184 [Pipeline] } 00:21:40.197 [Pipeline] // node 00:21:40.202 [Pipeline] End of Pipeline 00:21:40.240 Finished: SUCCESS